Posted by Hee Jung, Developer Relations Community Manager / Soonson Kwon, Developer Relations Program Manager
ML in Action is a virtual event to collect and share cool and useful machine learning (ML) use cases that leverage multiple Google ML products. This is the first run of an ML use case campaign by the ML Developer Programs team.
Let us announce the winners right here, right now. They have showcased practical uses of ML and how ML was adapted to real-life situations. We hope these projects can spark new applied ML project ideas and provide opportunities for ML community leaders to discuss ML use cases.
The four winners of “ML in Action” are:
Detecting Food Quality with Raspberry Pi and TensorFlow
By George Soloupis, ML Google Developer Expert (Greece)
This project helps people with smell impairment by identifying food degradation. The idea came suddenly when a friend revealed that he has no sense of smell due to a motorbike crash. Even after attending a lot of IT conferences, this issue remained unaddressed, and the power of machine learning was something we could rely on. Hence the goal: to create a prototype that is affordable, accurate, and usable by people with minimal knowledge of computers.
The basic setup of the food quality detection is this: a Raspberry Pi collects data from air sensors over time during the food degradation process. This single-board computer was very useful! With the GUI, it is easy to execute Python scripts and see the results on screen. Eight sensors collected data on chemical compounds such as NH3, H2S, O3, CO, and CH4. After running the prototype for one day, classes were set based on the results: the first hours of the food out of the fridge were labeled “good” and the rest “bad”. Then the dataset was evaluated with the help of TensorFlow and inference was executed with TensorFlow Lite.
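The time-based labeling step described above can be sketched in a few lines of Python. Note this is only an illustration: the four-hour cutoff, the function names, and the record layout are assumptions, not the project's actual code.

```python
# Label time-series gas-sensor readings: samples taken within the first
# GOOD_HOURS after the food left the fridge are "good", the rest "bad".
# The 4-hour cutoff and the record layout are illustrative assumptions.
GOOD_HOURS = 4

def label_reading(elapsed_hours: float) -> str:
    """Return the class label for a reading taken `elapsed_hours` after start."""
    return "good" if elapsed_hours < GOOD_HOURS else "bad"

def build_dataset(readings):
    """Turn (elapsed_hours, [8 sensor values]) pairs into (features, label) rows."""
    return [(values, label_reading(t)) for t, values in readings]

# Example: two readings from the eight gas sensors (NH3, H2S, O3, CO, CH4, ...).
sample = [
    (1.5, [0.12, 0.03, 0.01, 0.20, 0.05, 0.08, 0.02, 0.04]),
    (9.0, [0.55, 0.31, 0.04, 0.47, 0.22, 0.36, 0.11, 0.19]),
]
dataset = build_dataset(sample)
```

A labeled dataset in this shape can then be fed to a small TensorFlow classifier, with the trained model converted to TensorFlow Lite for on-device inference.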
Since there were no open-source prototypes out there with similar goals, it was a complete journey. Sensors on PCBs and standalone sensors were used to get the best mixture of accuracy, stability, and sensitivity. A logic level converter was used to minimize the use of resistors, and capacitors were placed for stability. And the result: a compact prototype! The Raspberry Pi could attach directly on top, with slots for eight sensors. It is developed in such a way that sensors can be replaced at any time, so users can experiment with different sensors. The inference values are sent via Bluetooth to a mobile device, so in the end a user with no advanced technical knowledge will be able to see food quality in an app built for Android (Kotlin).
Reference: Github, more to read
* This project is supported by Google Impact Fund.
Election Watch: Applying ML in Analyzing Election Discourse and Citizen Participation in Nigeria
By Victor Dibia, ML Google Developer Expert (USA)
This project explores the use of GCP tools in ingesting, storing, and analyzing data on citizen participation and election discourse in Nigeria. It began on the premise that the proliferation of social media interactions provides an interesting lens to study human behavior, ask important questions about election discourse in Nigeria, and interrogate social/demographic questions.
It is based on data collected from Twitter between September 2018 and March 2019 (tweets geotagged to Nigeria and tweets containing election-related keywords). Overall, the dataset contains 25.2 million tweets and retweets, 12.6 million original tweets, 8.6 million geotagged tweets, and 3.6 million tweets labeled (using an ML model) as political.
By analyzing election discourse, we can learn a few important things, including the issues that drive election discourse, how social media was used by candidates, and how participation was distributed across geographic regions in the country. Finally, in a country like Nigeria where up-to-date demographic data is lacking (e.g., on community structures, wealth distribution, etc.), this project shows how social media can be used as a surrogate to infer relative statistics (e.g., the existence of diaspora communities based on election discussion, and wealth distribution based on device type usage across the country).
Data for the project was collected using Python scripts that wrote tweets from the Twitter streaming API (matching certain criteria) to BigQuery. BigQuery queries were then used to generate aggregate datasets used for visualizations/analysis and for training machine learning models (political text classification models to label political text, and multi-class classification models to label general discourse). The models were built using TensorFlow 2.0 and trained on Colab notebooks powered by GCP GPU compute VMs.
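The "matching certain criteria" filter in such a streaming pipeline is typically a keyword check applied before rows are written to BigQuery. A minimal sketch, with a hypothetical keyword list (the project's actual criteria are not stated here):

```python
# Filter a stream of tweets down to election-related ones before loading
# them into BigQuery. The keyword list below is a hypothetical example.
ELECTION_KEYWORDS = {"election", "vote", "inec", "pdp", "apc", "nigeriadecides"}

def is_election_related(text: str) -> bool:
    """True if the tweet text contains any election keyword (case-insensitive)."""
    words = {w.strip("#@.,!?") for w in text.lower().split()}
    return not words.isdisjoint(ELECTION_KEYWORDS)

tweets = [
    "Make sure you go out and vote on Saturday! #NigeriaDecides",
    "Lagos traffic this morning is unbelievable",
]
matches = [t for t in tweets if is_election_related(t)]
```

In production, each matching tweet would then be appended to a BigQuery table (e.g., via the BigQuery client library's streaming inserts) for later aggregation.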
References: Election Watch website, ML model descriptions one, two
Bioacoustic Sound Detector (To identify bird calls in soundscapes)
By Usha Rengaraju, TFUG Organizer (India)
The “Visionary Perspective Plan (2020-2030) for the conservation of avian diversity, their ecosystems, habitats and landscapes in the country”, proposed by the Indian government to help in the conservation of birds and their habitats, inspired me to take up this project.
The extinction of bird species is a growing global concern, as it has a huge impact on food chains. Bioacoustic monitoring can provide a passive, low-labor, and cost-effective strategy for studying endangered bird populations. Recent advances in machine learning have made it possible to automatically identify bird songs for common species with ample training data. This innovation makes it easier for researchers and conservation practitioners to accurately survey population trends, so they will be able to regularly and more effectively evaluate threats and adjust their conservation actions.
This project is an implementation of a bioacoustic monitor using Masked Autoencoders in TensorFlow and Cloud TPUs. The project will be presented as a browser-based application using Flask. The deep learning prototype can process continuous audio data and then acoustically recognize the species.
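Processing continuous audio usually starts by slicing the signal into short overlapping windows and converting each to a spectral representation before it reaches the model. A minimal NumPy sketch of that front end; the frame length and hop size are illustrative values, not the project's actual parameters:

```python
import numpy as np

def frame_audio(audio: np.ndarray, frame_len: int, hop: int) -> np.ndarray:
    """Split a 1-D audio signal into overlapping frames of `frame_len` samples."""
    n_frames = 1 + (len(audio) - frame_len) // hop
    return np.stack([audio[i * hop : i * hop + frame_len] for i in range(n_frames)])

def magnitude_spectrogram(frames: np.ndarray) -> np.ndarray:
    """Per-frame magnitude spectrum via a real FFT (rows = time, cols = frequency)."""
    return np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))

# One second of synthetic audio at 16 kHz, framed into 25 ms windows with 10 ms hop.
sr = 16000
audio = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)
frames = frame_audio(audio, frame_len=400, hop=160)
spec = magnitude_spectrogram(frames)
```

Spectrogram patches like these are the kind of input a Masked Autoencoder can be pre-trained on, by masking out patches and learning to reconstruct them.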
The goal of the project when I started was to build a basic prototype for monitoring rare bird species in India. In the future I would like to expand the project to monitor other endangered species as well.
References: Kaggle Notebook, Colab Notebook, Github, the dataset and more to read
Persona Labs’ Digital Personas
By Martin Andrews and Sam Witteveen, ML Google Developer Experts (Singapore)
The components required to make the Personas work effectively include dynamic face models, expression generation models, Text-to-Speech (TTS), dialogue backend(s), and Speech Recognition (ASR). Much of this was built on GCP, with GPU VMs running the (many) deep learning models and combining the outputs into dynamic WebRTC video that streams to users via a browser front-end.
Much of the previous years’ work focused on making the Personas’ faces behave in a life-like way, while making sure that the overall latency (i.e. the time between the Persona hearing the user ask a question and its lips starting the response) is kept low, and that the rendering of individual images matches the 25 frames-per-second video rate required. As you might imagine, there were many deep learning modeling challenges, coupled with hard engineering issues to overcome.
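The 25 fps requirement translates directly into a hard per-frame time budget that every model in the rendering pipeline must fit inside. A tiny sketch of that arithmetic; the stage names and timings are made-up numbers for illustration:

```python
# At 25 frames per second, each rendered frame must be produced in at most
# 1000 / 25 = 40 ms. The per-stage timings below are illustrative only.
FPS = 25
frame_budget_ms = 1000 / FPS  # 40.0 ms per frame

# Hypothetical per-frame pipeline stages (ms): expression model, face render, encode.
stage_times_ms = {"expression": 12.0, "face_render": 18.0, "encode": 6.0}
total_ms = sum(stage_times_ms.values())  # 36.0 ms in this example
meets_realtime = total_ms <= frame_budget_ms
```

Keeping every stage's combined time under that 40 ms ceiling, while also minimizing the listen-to-response latency, is what makes the engineering hard.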
In terms of backend technologies, Google Cloud GPUs were used to train the deep learning models (built using TensorFlow/TFLite, PyTorch/ONNX and, more recently, JAX/Flax), and real-time serving is done by Nvidia T4 GPU-enabled VMs, launched as required. Google ASR is currently used as a streaming backend for speech recognition, and Google’s WaveNet TTS is used when multilingual TTS is required. The system also makes use of Google’s serverless stack, with Cloud Run and Cloud Functions being used in some of the dialogue backends.
Visit the Personas’ website (linked below) and you can see videos that demonstrate several aspects: what the Personas look like, their multilingual capability, potential applications, etc. However, the videos can’t really demonstrate what the interactivity ‘feels like’. For that, it’s best to get a live demo from Sam and Martin, and see what real-time deep learning model generation looks like!
Reference: The Persona Labs web site