March 29, 2024

Web and Technology News

Hitting the Books: Why a Dartmouth professor coined the term ‘artificial intelligence’

If the Wu-Tang produced it in ’23 instead of ’93, they’d have called it D.R.E.A.M. — because data rules everything around me. Where once our society brokered power based on strength of our arms and purse strings, the modern world is driven by data empo…

Mozilla made a Firefox plugin for offline translation

Mozilla has created a translation plugin for Firefox that works offline. Firefox Translations will need to download some files the first time you convert text in a specific language. However, it will be able to use your system’s resources to handle the translation, rather than sending the information to a data center for cloud processing.

The plugin emerged as a result of Mozilla’s work with the European Union-funded Project Bergamot. Others involved include the University of Edinburgh, Charles University, University of Sheffield and University of Tartu. The goal was to develop neural machine translation tools to help Mozilla create an offline translation option. “The engines, language models and in-page translation algorithms would need to reside and be executed entirely in the user’s computer, so none of the data would be sent to the cloud, making it entirely private,” Mozilla said.
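
Here’s a minimal sketch of that flow in Python, purely illustrative rather than Mozilla’s actual code: model files are fetched once and cached, and every translation afterwards runs on-device, so the text being translated never leaves the machine. The cache location, download step and inference call are all placeholders.

```python
from pathlib import Path

# Illustrative sketch of an offline-translation flow (not Mozilla's real code):
# model files are fetched once and cached, then all inference runs locally.
CACHE_DIR = Path.home() / ".cache" / "offline-translate"   # hypothetical location

def ensure_model(lang_pair: str) -> Path:
    """Return a cached model for e.g. 'en-es', fetching it only on first use."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    model_path = CACHE_DIR / f"{lang_pair}.model"
    if not model_path.exists():
        # Stand-in for the one-time download the plugin performs per language.
        model_path.write_bytes(b"placeholder model weights")
    return model_path

def translate(text: str, lang_pair: str = "en-es") -> str:
    """All inference happens on the user's machine; nothing is sent to a server."""
    model = ensure_model(lang_pair)
    # Stand-in for running the local neural engine against the cached model.
    return f"[{lang_pair} via {model.name}] {text}"

print(translate("Hello, world"))
```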

One of the big limitations of the plugin as things stand is that it can only handle translations between English and 12 other languages, according to TechCrunch. For now, Firefox Translations supports Spanish, Bulgarian, Czech, Estonian, German, Icelandic, Italian, Norwegian Bokmål and Nynorsk, Persian, Portuguese and Russian.

Mozilla and its partners on the project have created a training pipeline through which volunteers can help train new models so more languages can be added. They’re also looking for feedback on existing models, so Firefox Translations is very much a work in progress.

For the time being, though, the plugin can’t hold a candle to the 133 languages that Google Translate supports. Apple and Google both have mobile apps that can handle offline translations as well.

On the surface, it’s a little odd that a browser, which is by definition used to access the web, would need an offline translation option. But translating text on your device and avoiding the need to transfer it to and from a data center could be a boon for privacy and security.

Manara gets $3M to grow tech talent pool in the Middle East and North Africa

Edtech startup Manara has raised $3 million in pre-seed funding for its cohort-based training platform geared towards growing the tech-talent pool in the Middle East and North Africa (MENA) region. Manara fashions itself as a social impact edtech startup offering training in computer science to anyone who qualifies for the program. While its students do […]

Google begins the rollout of Play Store safety listings

Starting today, you’ll begin seeing a new section within Play Store listings that shows information on how apps collect, store and share data. Google first announced the feature in May 2021 and gave us a glimpse of what it would look like in July. In the data safety section, you’ll see not only what kind of data an app collects, but also whether the app needs that data to function and whether data collection is optional. It will also show why a specific piece of information is collected and whether the developer shares your data with third parties.

Developers can also add information about their security practices, such as whether they encrypt data in transit and whether you can ask them to delete your information. In addition, the section will show whether an app has validated its security practices against a global standard. And, for parents and guardians of young kids, it can also show whether an app is suitable for children.
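
As a rough illustration of the kind of information such a listing bundles together, here is a hypothetical sketch in Python; the field names are invented for clarity and are not Google’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical structure mirroring the data safety fields described above.
# Field names are illustrative only, not Google's actual schema.

@dataclass
class CollectedData:
    category: str                    # e.g. "approximate location", "contacts"
    purpose: str                     # why this piece of data is collected
    required: bool                   # False if the user can opt out of collection
    shared_with_third_parties: bool

@dataclass
class DataSafetyListing:
    app_name: str
    collected: list[CollectedData] = field(default_factory=list)
    encrypted_in_transit: bool = False
    deletion_requests_supported: bool = False
    independently_validated: bool = False   # reviewed against a global standard
    suitable_for_children: bool = False

listing = DataSafetyListing(
    app_name="ExampleApp",
    collected=[CollectedData("approximate location", "ads", required=False,
                             shared_with_third_parties=True)],
    encrypted_in_transit=True,
)
print(listing.app_name, "collects", len(listing.collected), "data category(ies)")
```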

Google says it’s rolling out the feature gradually, and the section will start showing up for you in the coming weeks if you don’t see it immediately. Take note that the tech giant is giving developers until July 20th to have a data safety section in place, so some apps might still not have one even if you’re already seeing the feature on other listings.

Google wants devices to know when you’re paying attention

Google has been working on a “new interaction language” for years, and today it’s sharing a peek at what it’s developed so far. The company is showcasing a set of movements it’s defined in its new interaction language in the first episode of a new series called In the lab with Google ATAP. That acronym stands for Advanced Technology and Projects, and it’s Google’s more-experimental division that the company calls its “hardware invention studio.”

The idea behind this “interaction language” is that the machines around us could be more intuitive and perceptive of our desire to interact with them by better understanding our nonverbal cues. “The devices that surround us… should feel like a best friend,” senior interaction designer at ATAP Lauren Bedal told Engadget. “They should have social grace.”

Specifically (so far, anyway), ATAP is analyzing our movements (as opposed to vocal tones or facial expressions) to see if we’re ready to engage, so devices know when to remain in the background instead of bombarding us with information. The team used the company’s Soli radar sensor to detect the proximity, direction and pathways of people around it. Then, it parsed that data to determine if someone is glancing at, passing, approaching or turning towards the sensor. 
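
To make that concrete, here is a toy heuristic in Python, not ATAP’s actual model: given a short track of distances and facing angles relative to a device, it sorts the movement into one of the four categories. All thresholds are invented for illustration.

```python
# Toy classifier (not ATAP's model) mapping a short track of radar-style readings
# to the four movements. Thresholds are invented for illustration only.

def classify(distances_m: list[float], facing_deg: list[float]) -> str:
    """distances_m: meters from the device per frame; facing_deg: 0 = facing it."""
    d_trend = distances_m[-1] - distances_m[0]          # negative = getting closer
    a_trend = abs(facing_deg[-1]) - abs(facing_deg[0])  # negative = rotating toward
    facing_now = abs(facing_deg[-1]) < 30

    if d_trend < -0.3 and facing_now:
        return "approach"   # moving closer while oriented toward the device
    if abs(a_trend) > 20 and abs(d_trend) < 0.3:
        return "turn"       # rotating toward or away without moving much
    if facing_now and abs(d_trend) < 0.1:
        return "glance"     # briefly looking, not approaching
    return "pass"           # moving by without engaging

print(classify([2.0, 1.6, 1.2], [10, 8, 5]))     # approach
print(classify([1.5, 1.5, 1.5], [80, 40, 10]))   # turn
print(classify([2.0, 2.0, 2.1], [70, 75, 80]))   # pass
```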

Google formalized this set of four movements, calling them Approach, Glance, Turn and Pass. These actions can be used as triggers for commands or reactions on things like smart displays or other types of ambient computers. If this sounds familiar, it’s because some of these gestures already work on existing Soli-enabled devices. The Pixel 4, for example, had a feature called Motion Sense that would snooze alarms when you waved at the phone, or wake it if it detected your hand coming towards it. Google’s Nest Hub Max used its camera to see when you’d raised your open palm, and would pause your media playback in response.
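
As a tiny illustration of the “triggers for commands” idea, the sketch below maps each movement to a made-up reaction on a smart display; none of this reflects a real Google API.

```python
# Illustrative mapping (not a real Google API) from the four movements to the
# kinds of reactions described above for an ambient device like a smart display.

class Display:
    def show(self, content):
        print("staying quiet" if content is None else f"showing: {content}")

ACTIONS = {
    "approach": lambda d: d.show("upcoming appointments and reminders"),
    "glance":   lambda d: d.show("a short snippet of information"),
    "turn":     lambda d: d.show("the next step of the current task"),
    "pass":     lambda d: d.show(None),   # user isn't engaging; stay in background
}

display = Display()
for movement in ("approach", "glance", "turn", "pass"):
    ACTIONS[movement](display)
```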

Approach feels similar to existing implementations. It allows devices to tell when you (or a body part) are getting closer, so they can bring up information you might be near enough to see. Like the Pixel 4, the Nest Hub reacts when it knows you’re close by, pulling up your upcoming appointments or reminders. It’ll also show touch commands on a countdown screen if you’re near, and switch to a larger, easy-to-read font when you’re further away.
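
A small sketch of that near/far behavior, with distance thresholds invented for illustration (Google hasn’t published specific values):

```python
# Sketch of the distance-dependent UI behavior described above. The thresholds
# are invented for illustration; Google hasn't published specific values.

def ui_for_distance(distance_m: float) -> dict:
    if distance_m < 1.0:
        # Close enough to touch: show controls and denser, smaller text.
        return {"font": "small", "touch_controls": True}
    # Further away: drop touch controls and switch to a larger, glanceable font.
    return {"font": "large", "touch_controls": False}

print(ui_for_distance(0.6))   # {'font': 'small', 'touch_controls': True}
print(ui_for_distance(2.5))   # {'font': 'large', 'touch_controls': False}
```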

While Glance may seem like it overlaps with Approach, Bedal explained that it can be for understanding where a person’s attention is when they’re using multiple devices. “Say you’re on a phone call with someone and you happen to glance at another device in the house,” she said. “Since we know you may have your attention on another device, we can offer a suggestion to maybe transfer your conversation to a video call.” Glance can also be used to quickly display a snippet of information.

Animation: an example of the Glance action, in which a man looks at a display to his right and its screen reacts in response. (Image: Google)

What’s less familiar are Turn and Pass. “With turning towards and away, we can allow devices to help automate repetitive or mundane tasks,” Bedal said. It can be used to determine when you’re ready for the next step in a multi-stage process, like following an onscreen recipe, or something repetitive, like starting and stopping a video. Pass, meanwhile, tells the device you’re not ready to engage.

It’s clear that Approach, Pass, Turn and Glance build on what Google’s implemented in bits and pieces into its products over the years. But the ATAP team also played with combining some of these actions, like passing and glancing or approaching and glancing, which is something we’ve yet to see much of in the real world. 

For all this to work well, Google’s sensors and algorithms need to be incredibly adept not only at recognizing when you’re making a specific action, but also when you’re not. Inaccurate gesture recognition can turn an experience that’s meant to be helpful into one that’s incredibly frustrating. 

ATAP’s head of design Leonardo Giusti said, “That’s the biggest challenge we have with these signals.” He said that with devices that are plugged in, there is more power available to run more complex algorithms than on a mobile device. Part of the effort to make the system more accurate is collecting more data to train machine learning algorithms on, including both the correct actions and similar but incorrect ones (so the models also learn what not to accept).
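
One way to picture that labeling approach, purely as an assumption about how such a dataset might be organized:

```python
# Sketch of the data-labeling idea described above: alongside examples of each
# intended movement, include similar-but-wrong movements under a reject label so
# the model also learns what NOT to accept. Features and labels are illustrative.

LABELS = ["approach", "glance", "turn", "pass", "none"]  # "none" = near-miss/reject

def example(distance_track, label):
    assert label in LABELS
    return {"distances_m": distance_track, "label": label}

dataset = [
    example([2.0, 1.5, 1.0], "approach"),  # deliberately walking up to the device
    example([2.0, 1.9, 1.8], "none"),      # drifting slightly closer, not engaging
    example([1.4, 1.4, 1.4], "glance"),    # brief look without moving
]

# A classifier trained on data like this can output "none" for ambiguous movement
# instead of forcing it into one of the four actions.
print(len(dataset), "labeled examples")
```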

Animation: one of the movements in Google’s new interaction language. (Image: Google)

“The other approach to mitigate this risk is through UX design,” Giusti said. He explained that the system can offer a suggestion rather than trigger a completely automated response, to allow users to confirm the right input rather than act on a potentially inaccurate gesture. 
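
A minimal sketch of that confidence-gated design, with thresholds invented for illustration:

```python
# Sketch of the UX mitigation described above: below a certain confidence the
# device only suggests an action for the user to confirm instead of acting
# automatically. Threshold values are invented for illustration.

def respond(gesture: str, confidence: float) -> str:
    if confidence >= 0.9:
        return f"auto: performing the '{gesture}' reaction"
    if confidence >= 0.6:
        return f"suggest: offer '{gesture}' and wait for the user to confirm"
    return "ignore: not confident enough to react at all"

for c in (0.95, 0.7, 0.4):
    print(respond("glance", c))
```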

Still, it’s not like we’re going to be frustrated by Google devices misinterpreting these four movements of ours in the immediate future. Bedal pointed out, “What we’re working on is purely research. We’re not focusing on product integration.” And to be clear, Google is sharing this look at the interaction language as part of a video series it’s publishing. Later episodes of In the lab with ATAP will cover other topics beyond this new language, and Giusti said it’s meant to “give people an inside look into some of the research that we are exploring.”

But it’s easy to see how this new language can eventually find its way into the many things Google makes. The company’s been talking about its vision for a world of “ambient computing” for years, where it envisions various sensors and devices embedded into the many surfaces around us, ready to anticipate and respond to our every need. For a world like that to not feel intrusive or invasive, there are many issues to sort out (protecting user privacy chief among them). Having machines that know when to stay away and when to help is part of that challenge.

Bedal, who’s also a professional choreographer, said, “We believe that these movements are really hinting to a future way of interacting with computers that feels invisible by leveraging the natural ways that we move.”

She added, “By doing so, we can do less and computers can… operate in the background, only helping us in the right moments.” 
