Canvas Category Software : Cloud Computing : General
Our mission is to organize the world’s information and make it universally accessible and useful.
Assembly Line
Alphabet to invest $5B in Waymo, its self-driving vehicle unit
Google parent company Alphabet plans to invest $5 billion in Waymo, its unit for autonomous vehicles, over the next few years. Alphabet CFO Ruth Porat announced the news during the company’s quarterly financial results call.
Prior to this, Waymo raised $2.25 billion in its first external funding round in 2020. The company raised another $2.5 billion in 2021 in a round that included funding from Andreessen Horowitz, AutoNation, Canada Pension Plan Investment Board, Fidelity Management & Research Company and more.
The new funding will enable Waymo to continue to build the world’s leading autonomous driving company, she said.
Again raises $43M from Google Ventures and others to turn CO2 into green chemicals
Again, a Danish climate tech startup that turns carbon dioxide into valuable chemicals, has raised $43 million in Series A funding. The investment round was co-led by Google Ventures (which invested in ClimateX and StatusPRO) and HV Capital. Kompas VC, EIFO – Denmark’s Export and Investment Fund, ACME Capital, and Atlantic Labs also participated in the round. With this round, the total funding raised by Again amounts to $100 million, including a $47 million Horizon Europe grant for the PyroCO2 project.
The new funding will be used to build additional facilities to combat the climate crisis at scale. It will be used to build additional production capacity to deliver green chemicals to customers, and R&D to expand Again’s product portfolio and bring more molecules to market.
How AlloyDB transformed Bayer’s data operations
Migrating to AlloyDB has been transformative for our business. In our previous PostgreSQL setup, the primary writer was responsible for both write operations and replicating those changes to reader nodes. The anticipated increase in write traffic and reader count would have overwhelmed this node, leading to potential bottlenecks and increased replication lag. AlloyDB’s architecture, which utilizes a single source of truth for all nodes, significantly reduced the impact of scaling read traffic. After migrating, we saw a dramatic improvement in performance, ensuring our ability to meet growing demands and maintain consistently low replication delay. In parallel load tests, a smaller AlloyDB instance reduced response times by over 50% on average and increased throughput by 5x compared to our previous PostgreSQL solution.
By migrating to AlloyDB, we’ve ensured that our business growth won’t be hindered by database limitations, allowing us to focus on innovation. The true test of our migration came during our first peak harvest season, a time where performance is critical for product decision timelines. Due to agriculture’s seasonal nature, a delay of just a few days can postpone a product launch by an entire year. Our customers were understandably nervous, but thanks to Google Cloud and AlloyDB, the harvest season went as smoothly as we could have hoped for.
To support our data strategy, we have adopted a consistent architecture across our Google Cloud projects. For a typical project, the stack consists of Google Kubernetes Engine (GKE) hosted pods and pipelines for publishing events and analytics data. While Bayer uses Apache Kafka across teams and cloud providers for data streaming, individual teams regularly use Pub/Sub internally for messaging and event-driven architectures. Data for analytics and reporting is generally stored in BigQuery, with custom processes for materialization once it lands. By using cross-project BigQuery datasets, we are able to work with a larger, real-time user group and enhance our operational capabilities.
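As a rough sketch of the event-publishing step in a stack like this, the snippet below assembles the kind of JSON envelope a GKE-hosted service might hand to Pub/Sub before the data lands in BigQuery. The field names and helper function are illustrative assumptions, not Bayer’s actual schema:

```python
import json
from datetime import datetime, timezone

def build_event(source, event_type, payload):
    """Assemble a JSON event envelope for publishing to a Pub/Sub topic.

    The envelope shape is a common convention, not Bayer's schema: a stable
    source/type pair for routing, plus a UTC timestamp that downstream
    processes can use for time-based partitioning in BigQuery.
    """
    envelope = {
        "source": source,
        "type": event_type,
        "published_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }
    # Pub/Sub message bodies are bytes, so serialize and encode.
    return json.dumps(envelope).encode("utf-8")

message = build_event("trial-service", "trial.completed", {"trial_id": "T-42"})
```

A publisher would then pass `message` as the data argument of the Pub/Sub client’s publish call; the rest of the pipeline only needs to agree on the envelope fields.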
Heuristics on the high seas: Mathematical optimization for cargo ships
Google’s Operations Research team is proud to announce the Shipping Network Design API, which implements a new approach to finding the most efficient routes for shipping networks. Our approach scales better than previous methods, enabling solutions to world-scale supply chain problems, while running faster than any known previous attempt. It is able to double the profit of a container shipper, deliver 13% more containers, and do so with 15% fewer vessels. Read on to see how we did it.
There are three components to the Liner Shipping Network Design and Scheduling Problem (LSNDSP). Network design determines the order in which vessels visit ports, network scheduling determines the times they arrive and leave, and container routing chooses the journey that containers take from origin to destination. Every container shipping company needs to solve all three challenges, but they are typically solved sequentially. Solving them all simultaneously is more difficult but is also more likely to discover better solutions.
Solutions to network design create service lines that a small set of vessels follow: for instance, sailing between eastern Asia, through the Suez canal, and to southern Europe. These service lines are published with dates, so that shippers can know when and where to have their containers ready at port.
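To make the container-routing subproblem concrete, here is a minimal sketch (not the Shipping Network Design API itself): once the network of legs and their costs is fixed, the cheapest journey for a container is a shortest-path search. The ports and leg costs below are invented for illustration:

```python
import heapq

def cheapest_route(graph, origin, destination):
    """Dijkstra's shortest path over a port graph.

    graph maps a port to a list of (next_port, cost) legs, where cost could
    stand in for sailing time or fuel. Returns (total_cost, list_of_ports).
    """
    queue = [(0, origin, [origin])]
    seen = set()
    while queue:
        cost, port, path = heapq.heappop(queue)
        if port == destination:
            return cost, path
        if port in seen:
            continue
        seen.add(port)
        for nxt, leg in graph.get(port, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + leg, nxt, path + [nxt]))
    return float("inf"), []  # destination unreachable

# Toy network: a direct Shanghai->Suez leg competes with a stop in Singapore.
ports = {
    "Shanghai": [("Singapore", 3), ("Suez", 9)],
    "Singapore": [("Suez", 5)],
    "Suez": [("Rotterdam", 4)],
}
cost, path = cheapest_route(ports, "Shanghai", "Rotterdam")  # cost 12, via Singapore
```

The real LSNDSP is far harder because the network itself (which legs exist, and when) is also a decision variable, which is exactly the part solved jointly here.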
Verse™ Secures $20.5 Million in Series A Funding led by GV to Help Organizations Reduce Electricity Costs & Emissions
Verse, whose software enables organizations to understand, plan, and manage clean energy, has raised a $20.5M Series A funding round. The investment, led by GV (Google Ventures) with participation from Coatue, CIV, and MCJ Collective, will support Verse as it scales commercial operations and develops new product capabilities to help organizations reduce emissions and lower electricity costs.
Unlocking new value in industrial automation with AI
Working with the robotics team at NVIDIA, we have successfully tested NVIDIA robotics platform technologies, including the NVIDIA Isaac Manipulator foundation models for a robot grasping skill, with the Intrinsic platform. This prototype features an industrial application specified by one of our partners and customers, Trumpf Machine Tools. This grasping skill, trained with 100% synthetic data generated by NVIDIA Isaac Sim, can be used to build sophisticated solutions that perform adaptive and versatile object-grasping tasks in simulation and in the real world. Instead of hard-coding specific grippers to grasp specific objects in a certain way, efficient code for a particular gripper and object is auto-generated to complete the task using the foundation model and synthetic training data.
Together with Google DeepMind, we’ve demonstrated some novel and high value methods for robotic programming and orchestration — many of which have practical applications today:
- Multi-robot motion planning with machine learning
- Learning from demonstration, applied to two-handed dexterous manipulation
- Foundation models for perception: enabling a robotic system to understand the next task and the physical objects involved requires a real-time, accurate, and semantic understanding of the environment.
Google, Microsoft, and Nucor announce a new initiative to aggregate demand to scale the adoption of advanced clean electricity technologies
Google LLC, Microsoft Corporation, and Nucor Corporation announced they will work together across the electricity ecosystem to develop new business models and aggregate their demand for advanced clean electricity technologies. These models will be designed to accelerate the development of first-of-a-kind (FOAK) and early commercial projects, including advanced nuclear, next-generation geothermal, clean hydrogen, long-duration energy storage (LDES) and others.
The companies will initially focus on proving out the demand aggregation and procurement model through advanced technology pilot projects in the United States. The companies will pilot a project delivery framework focused on three enabling levers for early commercial projects: signing offtake agreements for technologies that are still early on the cost curve, bringing a clear customer voice to policymakers and other stakeholders on broader long-term ecosystem improvements, and developing new enabling tariff structures in partnership with energy providers and utilities.
HD Hyundai, Google Cloud team up to accelerate generative AI innovation
HD Hyundai and Google Cloud have formed a strategic partnership to use the US firm’s multimodal AI model Gemini, unveiled earlier this month, across the Korean company’s core businesses, including shipbuilding, heavy machinery and energy. Under the partnership, Google Cloud will provide HD Hyundai with enterprise tools such as the Vertex AI platform to develop industry-specific AI applications. Starting in January 2024, HD Hyundai and Google Cloud will develop various AI solutions tailored to industry-specific needs and cultivate AI experts at the Korean conglomerate.
Lightmatter Accelerates Growth and Expands Photonic Chip Deployments With $155M in New Funding; Now Valued at $1.2B
Lightmatter, the leader in photonics, announced today it has raised a $155M Series C-2 led by GV (Google Ventures) and Viking Global Investors, with participation from others. With this round, Lightmatter has raised over $420 million to date and is now valued at over $1.2B. This new financing allows the company to expedite growth to meet the increasing demand for high-performance computing (HPC) from AI innovators. Lightmatter plans to expand its world-class team and office footprint, while accelerating its ability to provide customers increased performance on the most advanced AI workloads.
Lightmatter is developing photonic technologies that reconstruct how chips calculate and communicate, which can be leveraged by the biggest cloud providers, semiconductor companies, and enterprises for their computing needs. The company provides a full stack of photonics-enabled hardware and software solutions that simultaneously reduce power consumption and increase performance. This is essential for highly compute-intensive workloads such as AI, which have grown rapidly to affect every critical industry.
Xometry Leverages Google Cloud To Accelerate The Digitization Of Manufacturing Globally
Xometry, the global AI-powered marketplace connecting enterprise buyers with suppliers of manufacturing services, today announced a partnership with Google Cloud to leverage Vertex AI within Xometry’s AI-powered Instant Quoting Engine. Using Vertex AI, Xometry will accelerate the deployment of new auto-quote methods and models, extending its instant-quoting and fulfillment capabilities to encompass the broadest and most comprehensive set of manufacturing technologies. As a result, Vertex AI will help Xometry expand the markets it serves for custom manufacturing and further advance the digitization of manufacturing globally.
Automate plant maintenance using MDE with ABAP SDK for Google Cloud
Analyzing production data at scale for huge datasets is always a challenge, especially when there’s data from multiple production facilities involved with thousands of assets in production pipelines. To help solve this challenge, our Manufacturing Data Engine is designed to help manufacturers manage end-to-end shop floor business processes.
Manufacturing Data Engine (MDE) is a scalable solution that accelerates, simplifies, and enhances the ingestion, processing, contextualization, storage, and usage of manufacturing data for monitoring, analytical, and machine learning use cases. This suite of components can help manufacturers accelerate their transformation with Google Cloud’s analytics and AI capabilities.
Broadcom’s transformation journey with Google Cloud
Since the migration to Google Cloud, we’ve eliminated 165 software test labs and saved 50% on costs by hosting most of our work on the cloud instead of relying on dedicated hardware running in Broadcom data centers.
Our collaboration with Google Cloud has also enabled Broadcom to deliver new product features faster while keeping products up to date and free of technical debt. This is a major competitive boon. By adopting Google Cloud as a scalable platform for product development, we now deliver rapid elasticity to absorb spikes in demand for products that can reach up to a million requests per second. Equally important, it’s helped us keep the platform secure to protect our customers’ workloads and data.
Prior to migrating, Broadcom operated 50-plus data centers globally. The plan was to replace all 50-plus data centers in six months—which we did. It was crucial that we got this right because the workloads were time-sensitive and customer-sensitive, and any glitch could have a huge impact on customers.
🧠🦾 Google’s Robotic Transformer 2: More Than Meets the Eye
Google DeepMind’s Robotic Transformer 2 (RT2) is an evolution of vision language model (VLM) software. Trained on images from the web, RT2 software employs robotics datasets to manage low-level robotics control. Traditionally, VLMs have been used to combine inputs from both visual and natural language text datasets to accomplish more complex tasks. Of course, ChatGPT is at the front of this trend.
Google researchers identified a gap in how current VLMs were being applied in the robotic space. They note that current methods and approaches tend to focus on high-level robotic theory such as strategic state machine models. This leaves a void in the lower-level execution of robotic action, where the majority of control engineers execute work. Thus, Google is attempting to bring the power and benefits of VLMs down into the control engineers’ domain of programming robotics.
U. S. Steel Aims to Improve Operational Efficiencies and Employee Experiences with Google Cloud’s Generative AI
United States Steel Corporation (NYSE: X) (“U. S. Steel”) and Google Cloud today announced a new collaboration to build applications using Google Cloud’s generative artificial intelligence (“gen AI”) technology to drive efficiencies and improve employee experiences in the largest iron ore mine in North America. As a leading manufacturer engaging in gen AI with Google Cloud, U. S. Steel continues to advance its more than 100-year legacy of innovation.
The first gen AI-driven application that U. S. Steel will launch is called MineMind™ which aims to simplify equipment maintenance by providing optimal solutions for mechanical problems, saving time and money, and ultimately improving productivity. Underpinned by Google Cloud’s AI technology like Document AI and Vertex AI, MineMind™ is expected to not only improve the maintenance team’s experience by more easily bringing the information they need to their fingertips, but also save costs from more efficient use of technicians’ time and better maintained trucks. The initial phase of the launch will begin in September and will impact more than 60 haul trucks at U. S. Steel’s Minnesota Ore Operations facilities, Minntac and Keetac.
How AI is helping airlines mitigate the climate impact of contrails
🧠🦾 RT-2: New model translates vision and language into action
Robotic Transformer 2 (RT-2) is a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control.
High-capacity vision-language models (VLMs) are trained on web-scale datasets, making these systems remarkably good at recognising visual or language patterns and operating across different languages. But for robots to achieve a similar level of competency, they would need to collect robot data, first-hand, across every object, environment, task, and situation.
In our paper, we introduce Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities.
Energy Startup Says It Has Achieved Geothermal Tech Breakthrough
In a landmark step for enhanced geothermal technology’s potential as a dependable carbon-free energy source, startup Fervo Energy has completed a performance demonstration of its commercial pilot.
The Houston-based company wrapped up a full-scale, 30-day well test at its Project Red site in northern Nevada, which was able to generate 3.5 megawatts of electricity, according to a company statement. (One megawatt can power roughly 750 homes at once.) Project Red will connect to the grid later this year and power Google’s data centers and infrastructure throughout Nevada. It’s part of the corporate agreement between the startup and Alphabet Inc.’s Google to develop enhanced geothermal systems.
🧠🦾 RoboCat: A self-improving robotic agent
RoboCat learns much faster than other state-of-the-art models. It can pick up a new task with as few as 100 demonstrations because it draws from a large and diverse dataset. This capability will help accelerate robotics research, as it reduces the need for human-supervised training, and is an important step towards creating a general-purpose robot.
RoboCat is based on our multimodal model Gato (Spanish for “cat”), which can process language, images, and actions in both simulated and physical environments. We combined Gato’s architecture with a large training dataset of sequences of images and actions of various robot arms solving hundreds of different tasks.
The combination of all this training means the latest RoboCat is based on a dataset of millions of trajectories, from both real and simulated robotic arms, including self-generated data. We used four different types of robots and many robotic arms to collect vision-based data representing the tasks RoboCat would be trained to perform.
SAP and Google Cloud Expand Partnership to Build the Future of Open Data and AI for Enterprises
SAP SE (NYSE: SAP) and Google Cloud announced an extensive expansion of their partnership, introducing a comprehensive open data offering designed to simplify data landscapes and unleash the power of business data.
The offering enables customers to build an end-to-end data cloud that brings data from across the enterprise landscape using the SAP Datasphere solution together with Google’s data cloud, so businesses can view their entire data estates in real time and maximize value from their Google Cloud and SAP software investments.
FogLAMP on Google Cloud
🦾♻️ Robotic deep RL at scale: Sorting waste and recyclables with a fleet of robots
In “Deep RL at Scale: Sorting Waste in Office Buildings with a Fleet of Mobile Manipulators”, we discuss how we studied this problem through a recent large-scale experiment, where we deployed a fleet of 23 RL-enabled robots over two years in Google office buildings to sort waste and recycling. Our robotic system combines scalable deep RL from real-world data with bootstrapping from training in simulation and auxiliary object perception inputs to boost generalization, while retaining the benefits of end-to-end training, which we validate with 4,800 evaluation trials across 240 waste station configurations.
How BigQuery helps Leverege deliver business-critical enterprise IoT solutions at scale
Leverege IoT Stack is deployed with Google Kubernetes Engine (GKE), a fully managed Kubernetes service, for managing collections of microservices. Leverege uses Google Cloud Pub/Sub, a fully managed service, as the primary means of message routing for data ingestion, and Google Firebase for real-time data and user interface hosting. For long-term data storage, historical querying and analysis, and real-time insights, Leverege relies on BigQuery.
BigQuery allows Leverege to record the full volume of historical data at a low storage cost, while only paying to access small segments of data on-demand using table partitioning. For each of these examples, historical analysis using BigQuery can help identify pain points and improve operational efficiencies. They can also do so with both public datasets and private datasets. This means an auto wholesaler can expose data for specific vehicles, but not the entire dataset (i.e., no API queries). Likewise, a boat engine manufacturer can make subsets of data available to different end users.
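The cost behavior described above can be sketched in miniature: with date partitioning, a query that filters on the partition column scans only the matching partitions, so cost tracks the slice you touch rather than the whole table. The toy data and function below are illustrative, not Leverege’s schema:

```python
from datetime import date

# Hypothetical vehicle-telemetry table, partitioned by ingestion day.
partitions = {
    date(2023, 5, 1): [{"vin": "A1", "speed": 62}, {"vin": "B2", "speed": 55}],
    date(2023, 5, 2): [{"vin": "A1", "speed": 48}],
    date(2023, 5, 3): [{"vin": "C3", "speed": 71}],
}

def query_range(start, end):
    """Scan only partitions inside [start, end], mimicking how BigQuery
    prunes partitions when a query filters on the partitioning column;
    untouched partitions contribute nothing to the bytes billed."""
    scanned = [d for d in partitions if start <= d <= end]
    rows = [row for d in scanned for row in partitions[d]]
    return len(scanned), rows

n_scanned, rows = query_range(date(2023, 5, 1), date(2023, 5, 2))
```

In real BigQuery the same effect comes from a `WHERE` clause on the partition column; the point is that on-demand cost scales with the partitions scanned, not the table’s total size.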
Building a Visual Quality Control solution in Google Cloud using Vertex AI
In this blog post, we consider the problem of defect detection in packages on assembly and sorting lines. More specifically, we present a real-time visual quality control solution that is capable of tracking multiple objects (packages) on a line, analyzing each object, and evaluating the probability of a defect or damaged parcel. The solution was implemented using the Google Cloud Vertex AI platform and AutoML services, and we have made the reference implementation available in our git repository. This implementation can be used as a starting point for developing custom visual quality control pipelines.
⭐ Hunting For Hardware-Related Errors In Data Centers
The data center computational errors that Google and Meta engineers reported in 2021 have raised concerns regarding an unexpected cause: manufacturing defect levels on the order of 1,000 DPPM (defective parts per million). Specific to a single core in a multi-core SoC, these hardware defects are difficult to isolate during data center operations and manufacturing test processes. In fact, silent data errors (SDEs) can go undetected for months because the precise inputs and local environmental conditions (temperature, noise, voltage, clock frequency) have not yet been applied.
For instance, Google engineers noted ‘an innocuous change to a low-level library’ started to give wrong answers for a massive-scale data analysis pipeline. They went on to write, “Deeper investigation revealed that these instructions malfunctioned due to manufacturing defects, in a way that could only be detected by checking the results of these instructions against the expected results; these are ‘silent’ corrupt execution errors, or CEEs.”
Engineers at Google further confirmed their need for internal data, “Our understanding of CEE impacts is primarily empirical. We have observations of the form, ‘This code has miscomputed (or crashed) on that core.’ We can control what code runs on what cores, and we partially control operating conditions (frequency, voltage, temperature). From this, we can identify some mercurial cores. But because we have limited knowledge of the detailed underlying hardware, and no access to the hardware-supported test structures available to chip makers, we cannot infer much about root causes.”
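The detection strategy the quote describes, checking a computation’s results against precomputed expected results, can be sketched as a golden-value screener. Everything below is a simplified illustration, including the simulated defect; it is not Google’s production tooling:

```python
def detect_silent_errors(compute, golden_cases):
    """Re-run known inputs through a computation and compare against
    precomputed expected results; any mismatch flags a possible silent
    corrupt execution error on the core running this code."""
    mismatches = []
    for args, expected in golden_cases:
        actual = compute(*args)
        if actual != expected:
            mismatches.append((args, expected, actual))
    return mismatches

def faulty_multiply(a, b):
    # Simulated manufacturing defect: one operand pair miscomputes.
    result = a * b
    if (a, b) == (7, 9):
        result ^= 1  # flip the low bit -- a "silent" corruption
    return result

cases = [((3, 4), 12), ((7, 9), 63), ((10, 10), 100)]
bad = detect_silent_errors(faulty_multiply, cases)  # flags the (7, 9) case
```

The hard part in practice, as the Google engineers note, is that real SDEs only appear under particular inputs and operating conditions, so screeners like this must sweep both.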
RT-1: Robotics Transformer for Real-World Control at Scale
Major recent advances in multiple subfields of machine learning (ML) research, such as computer vision and natural language processing, have been enabled by a shared common approach that leverages large, diverse datasets and expressive models that can absorb all of the data effectively. Although there have been various attempts to apply this approach to robotics, robots have not yet leveraged highly-capable models as well as other subfields.
Several factors contribute to this challenge. First, there’s the lack of large-scale and diverse robotic data, which limits a model’s ability to absorb a broad set of robotic experiences. Data collection is particularly expensive and challenging for robotics because dataset curation requires engineering-heavy autonomous operation, or demonstrations collected using human teleoperations. A second factor is the lack of expressive, scalable, and fast-enough-for-real-time-inference models that can learn from such datasets and generalize effectively.
To address these challenges, we propose the Robotics Transformer 1 (RT-1), a multi-task model that tokenizes robot inputs and outputs actions (e.g., camera images, task instructions, and motor commands) to enable efficient inference at runtime, which makes real-time control feasible. This model is trained on a large-scale, real-world robotics dataset of 130k episodes that cover 700+ tasks, collected using a fleet of 13 robots from Everyday Robots (EDR) over 17 months. We demonstrate that RT-1 can exhibit significantly improved zero-shot generalization to new tasks, environments and objects compared to prior techniques. Moreover, we carefully evaluate and ablate many of the design choices in the model and training set, analyzing the effects of tokenization, action representation, and dataset composition. Finally, we’re open-sourcing the RT-1 code, and hope it will provide a valuable resource for future research on scaling up robot learning.
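One of those design choices, action tokenization, can be illustrated with a small sketch: each continuous action dimension is clamped to a range and uniformly binned into an integer token (RT-1 uses 256 bins per dimension; the value range below is an assumption for illustration):

```python
def tokenize_action(values, low=-1.0, high=1.0, bins=256):
    """Discretize continuous action dimensions into integer tokens by
    uniform binning, in the spirit of RT-1's action tokenization."""
    tokens = []
    for v in values:
        v = min(max(v, low), high)        # clamp to the valid range
        frac = (v - low) / (high - low)   # map to [0, 1]
        tokens.append(min(int(frac * bins), bins - 1))
    return tokens

# e.g. three action dimensions at range min, midpoint, and max:
tokens = tokenize_action([-1.0, 0.0, 1.0])  # [0, 128, 255]
```

Turning actions into a short sequence of discrete tokens is what lets a Transformer treat robot control like any other sequence-modeling problem.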
Using AI to increase asset utilization and production uptime for manufacturers
Google Cloud created purpose-built tools and solutions to organize manufacturing data, make it accessible and useful, and help manufacturers quickly take significant steps on this journey by reducing the time to value. In this post, we will explore a practical example of how manufacturers can use Google Cloud manufacturing solutions to train, deploy, and extract value from ML-enabled capabilities to predict asset utilization and maintenance needs. The first step to a successful machine learning project is to unify the necessary data in a common repository. For this, we will use Manufacturing Connect, the factory edge platform co-developed with Litmus, to connect to manufacturing assets and stream asset telemetry to Pub/Sub.
The following scenario is based on a hypothetical company, Cymbal Materials, a fictitious discrete manufacturer that runs 50+ factories in 10+ countries. 90% of Cymbal Materials’ manufacturing processes involve milling, which is performed using industrial computer numerical control (CNC) milling machines. Although their factories implement routine maintenance checklists, unplanned and unknown failures still happen occasionally. Moreover, many Cymbal Materials factory workers lack the experience to identify and troubleshoot failures, due to labor shortages and high turnover in their factories. Hence, Cymbal Materials is working with Google Cloud to build a machine learning model that can identify and analyze failures on top of Manufacturing Connect, Manufacturing Data Engine, and Vertex AI.
Intro to deep learning to track deforestation in supply chains
In my experience, it’s common in machine learning to surrender to a trial-and-error process of experimenting with many different algorithms until you get the desired result. My peers and I at Google have a People and Planet AI YouTube series where we talk about how to train and host a model for environmental purposes using Google Cloud and Google Earth Engine. Our focus is inspiring people to use deep learning; if we could rename the series, we would call it AI for Minimalists, since we would recommend artificial neural networks for most of our use cases. In this episode we give an overview of what deep learning is and how you can use it to track deforestation in supply chains.
The art of effective factory data visualization
Anomaly detection in industrial IoT data using Google Vertex AI: A reference notebook
Modern manufacturing, transportation, and energy companies routinely operate thousands of machines and perform hundreds of quality checks at different stages of their production and distribution processes. Industrial sensors and IoT devices enable these companies to collect comprehensive real-time metrics across equipment, vehicles, and produced parts, but the analysis of such data streams is a challenging task.
We start with a discussion of how the health monitoring problem can be converted into standard machine learning tasks and what pitfalls one should be aware of, and then implement a reference Vertex AI pipeline for anomaly detection. This pipeline can be viewed as a starter kit for quick prototyping of IoT anomaly detection solutions that can be further customized and extended to create production-grade platforms.
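As a baseline for the kind of health monitoring discussed here, far simpler than the Vertex AI pipeline itself, a z-score filter over a single sensor channel flags readings that sit far from the mean. The threshold and data below are illustrative assumptions:

```python
import statistics

def flag_anomalies(readings, threshold=2.0):
    """Return indices of readings whose z-score exceeds the threshold.

    A one-channel baseline: useful as a sanity check before reaching for
    learned anomaly-detection models.
    """
    mean = statistics.fmean(readings)
    std = statistics.pstdev(readings)
    if std == 0:
        return []  # a perfectly flat signal has no outliers by this measure
    return [i for i, r in enumerate(readings) if abs(r - mean) / std > threshold]

# A vibration-like channel with one obvious spike at index 4.
readings = [20.1, 19.8, 20.3, 20.0, 90.0, 19.9, 20.2, 20.1]
anomalies = flag_anomalies(readings)  # [4]
```

A single large outlier inflates the standard deviation and mutes its own z-score, which is one reason production pipelines favor robust statistics or learned models over this baseline.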
Table Tennis: A Research Platform for Agile Robotics
Robot learning has been applied to a wide range of challenging real-world tasks, including dexterous manipulation, legged locomotion, and grasping. It is less common to see robot learning applied to dynamic, high-acceleration tasks requiring tight-loop human-robot interactions, such as table tennis. There are two complementary properties of the table tennis task that make it interesting for robotic learning research. First, the task requires both speed and precision, which puts significant demands on a learning algorithm. At the same time, the problem is highly structured (with a fixed, predictable environment) and naturally multi-agent (the robot can play with humans or another robot), making it a desirable testbed to investigate questions about human-robot interaction and reinforcement learning. These properties have led to several research groups developing table tennis research platforms.
How Boeing overcame their on-premises implementation challenges with data & AI
CircularNet: Reducing waste with Machine Learning
The facilities where our waste and recyclables are processed are called “Material Recovery Facilities” (MRFs). Each MRF processes tens of thousands of pounds of our societal “waste” every day, separating valuable recyclable materials like metals and plastics from non-recyclable materials. A key inefficiency within the current waste capture and sorting process is the inability to identify and segregate waste into high quality material streams. The accuracy of the sorting directly determines the quality of the recycled material; for high-quality, commercially viable recycling, the contamination levels need to be low. Even though the MRFs use various technologies alongside manual labor to separate materials into distinct and clean streams, the exceptionally cluttered and contaminated nature of the waste stream makes automated waste detection challenging to achieve, and the recycling rates and the profit margins stay at undesirably low levels.
Enter what we call “CircularNet”, a set of models that lowers barriers to AI/ML tech for waste identification and all the benefits this new level of transparency can offer. Our goal with CircularNet is to develop a robust and data-efficient model for waste/recyclables detection, which can support the way we identify, sort, manage, and recycle materials across the waste management ecosystem.
Synopsys helps semiconductor designers accelerate chip design and development on Google Cloud
EDA software is a large consumer of high performance computing capacity in the cloud. With the release of Synopsys Cloud bring-your-own-cloud (BYOC) solution on Google Cloud, chip designers can now scale their Google Cloud infrastructure with Synopsys’s leading EDA tools under the flexible FlexEDA pay-per-use model and access unlimited EDA software license availability on-demand by the hour or minute.
Lufthansa increases on-time flights by wind forecasting with Google Cloud ML
The magnitude and direction of wind significantly impacts airport operations, and Lufthansa Group Airlines are no exception. A particularly troublesome kind is called BISE: it is a cold, dry wind that blows from the northeast to southwest in Switzerland, through the Swiss Plateau. Its effects on flight schedules can be severe, such as forcing planes to change runways, which can create a chain reaction of flight delays and possible cancellations. In Zurich Airport, in particular, BISE can potentially reduce capacity by up to 30%, leading to further flight delays and cancellations, and to millions in lost revenue for Lufthansa (as well as dissatisfaction among their passengers).
Machine learning (ML) can help airports and airlines better anticipate and manage these types of disruptive weather events. In this blog post, we’ll explore an experiment Lufthansa ran together with Google Cloud and its Vertex AI Forecast service, accurately predicting BISE hours in advance with a more than 40% relative improvement in accuracy over internal heuristics, all within days instead of the months that ML projects of this magnitude and performance often take.
How Volkswagen and Google Cloud are using machine learning to design more energy-efficient cars
Volkswagen strives to design beautiful, performant, and energy-efficient vehicles. This entails an iterative process in which designers go through many design drafts, evaluating each, integrating the feedback, and refining. For example, a vehicle’s drag coefficient—its resistance to air—is one of the most important factors in energy efficiency. Thus, getting estimates of the drag coefficient for several designs helps designers experiment and converge toward more energy-efficient solutions. The cheaper and faster this feedback loop is, the more it enables the designers.
This joint research effort between Volkswagen and Google has produced promising results with the help of the Vertex AI platform. In this first milestone, the team was able to successfully bring recent AI research results a step closer to practical application for car design. This first iteration of the algorithm can produce a drag coefficient estimate with an average error of just 4%, within a second. An average error of 4%, while not quite as accurate as a physical wind tunnel test, can be used to narrow a large selection of design candidates to a small shortlist. And given how quickly the estimates appear, we have made a substantial improvement on the existing methods that take days or weeks. With the algorithm that we have developed, designers can run more efficiency tests, submit more candidates, and iterate towards richer, more effective designs in just a small fraction of the time previously required.
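For readers wondering what an “average error of 4%” means in practice, here is a small sketch of how such a figure might be computed as a mean relative error of fast surrogate-model estimates against wind-tunnel measurements. The numbers are invented for illustration; they are not Volkswagen’s data.

```python
# Illustrative only: computing a mean relative error for drag-coefficient
# estimates against wind-tunnel measurements. All numbers are made up.

def mean_relative_error(predicted, measured):
    """Average of |prediction - measurement| / measurement across designs."""
    assert len(predicted) == len(measured)
    return sum(abs(p - m) / m for p, m in zip(predicted, measured)) / len(predicted)

measured  = [0.30, 0.25, 0.32, 0.28]   # wind-tunnel drag coefficients
predicted = [0.31, 0.24, 0.33, 0.27]   # fast surrogate-model estimates

err = mean_relative_error(predicted, measured)
print(f"average error: {err:.1%}")
```

An error at this level is too coarse to replace the wind tunnel, but, as the excerpt notes, it is plenty to rank a large batch of candidate designs and shortlist the most promising ones in seconds.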
Towards Helpful Robots: Grounding Language in Robotic Affordances
In “Do As I Can, Not As I Say: Grounding Language in Robotic Affordances”, we present a novel approach, developed in partnership with Everyday Robots, that leverages advanced language model knowledge to enable a physical agent, such as a robot, to follow high-level textual instructions for physically-grounded tasks, while grounding the language model in tasks that are feasible within a specific real-world context. We evaluate our method, which we call PaLM-SayCan, by placing robots in a real kitchen setting and giving them tasks expressed in natural language. We observe highly interpretable results for temporally-extended complex and abstract tasks, like “I just worked out, please bring me a snack and a drink to recover.” Specifically, we demonstrate that grounding the language model in the real world nearly halves errors over non-grounded baselines. We are also excited to release a robot simulation setup where the research community can test this approach.
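The core mechanism of PaLM-SayCan can be reduced to a toy scoring loop: the language model scores how useful each skill is for the instruction, the robot’s value functions score how feasible each skill is in the current state, and the robot executes the skill with the highest combined score. The skills and probabilities below are invented for illustration.

```python
# Toy version of the SayCan idea: combine a language-model usefulness
# score with an affordance (feasibility) score and pick the best skill.
# All scores here are made up.

instruction = "I spilled my drink, can you help?"

# How useful the language model thinks each skill is for the instruction
language_score = {"find a sponge": 0.6, "find an apple": 0.1, "go to the counter": 0.3}
# How likely each skill is to succeed from the robot's current state
affordance_score = {"find a sponge": 0.8, "find an apple": 0.9, "go to the counter": 0.2}

combined = {skill: language_score[skill] * affordance_score[skill]
            for skill in language_score}
best = max(combined, key=combined.get)
print(best)  # "find a sponge": 0.48 beats 0.09 and 0.06
```

The grounding effect is visible even in this toy: a skill the language model likes is still rejected if the robot’s value functions say it is infeasible right now, which is exactly why grounded planning cuts errors relative to the non-grounded baselines.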
Aramco and Cognite join forces in new data venture
Aramco and Cognite, a global leader in industrial software, have launched CNTXT, a joint venture based in the Kingdom of Saudi Arabia. Headquartered in Riyadh, CNTXT aims to support the Kingdom’s industrial digitalization, and the wider MENA region.
CNTXT will provide digital transformation services enabled by advanced cloud solutions and leading industrial software. These solutions and services aim to help public and private sector companies to future-proof their data infrastructure, increase revenue, cut costs and reduce risks while enhancing operational sustainability and security. CNTXT is Google Cloud’s reseller for cloud solutions in the Kingdom and the exclusive reseller of Cognite Data Fusion in MENA region. Additionally, Google Cloud is expected to launch a “Center of Excellence” later this year to provide training to developers and business leaders in how to use cloud technologies.
Maersk Mobile: All the Way with Flutter
The Maersk App helps our customers follow the progress of their shipments in real time. In late 2017, the team built the app on native platforms (Android and iOS) with a very small group of engineers compared to the size of the web teams. Keeping up with the business needs of our customers was challenging and time-consuming, as all development had to be done twice. Over time, the tech debt of maintaining two codebases kept growing as the underlying platforms changed and we added new features and services for our customers in a rapidly growing user base.
One additional underrated benefit is its seamless integration with Firebase, Google’s Backend-as-a-Service (BaaS) platform. Engineers can benefit from Firebase services like analytics, performance monitoring, crash reporting, and app distribution to QA, which are available out of the box with minimal code and configuration changes.
We incorporated the BLoC (Business Logic Component) architecture to manage business logic and the UI (view) separately. BLoC helped us manage app state more effectively, since it was easy to keep common state throughout the app for a consistent user experience, with improved control over user access.
TELUS: Solving for workers’ safety with edge computing and 5G
Together with Google Cloud, we have been leveraging solutions with the power of MEC and 5G to develop a workers’ safety application in our Edmonton Data Center that enables on-premise video analytics cameras to screen manufacturing facilities and ensure compliance with safety requirements to operate heavy-duty machinery. The CCTV (closed-circuit television) cameras we used are cost-effective and easier to deploy than RTLS (real time location services) solutions that detect worker proximity and avoid collisions. This is a positive, proactive step to steadily improve workplace safety. For example, if a worker’s hand is close to a drill, that drill press will not bore holes in any surface until the video analytics camera detects that the worker’s hand has been removed from the safety zone area.
Introducing new Google Cloud manufacturing solutions: smart factories, smarter workers
The new manufacturing solutions from Google Cloud give manufacturing engineers and plant managers access to unified and contextualized data from across their disparate assets and processes.
Manufacturing Data Engine is the foundational cloud solution to process, contextualize and store factory data. The cloud platform can acquire data from any type of machine, supporting a wide range of data, from telemetry to image data, via a private, secure, and low-cost connection between edge and cloud. With built-in data normalization and context-enrichment capabilities, it provides a common data model, with a factory-optimized data lakehouse for storage.
Manufacturing Connect is the factory edge platform co-developed with Litmus that quickly connects with nearly any manufacturing asset via an extensive library of 250-plus machine protocols. It translates machine data into a digestible dataset and sends it to the Manufacturing Data Engine for processing, contextualization and storage. By supporting containerized workloads, it allows manufacturers to run low-latency data visualization, analytics and ML capabilities directly on the edge.
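To illustrate what “translates machine data into a digestible dataset” means, here is a hypothetical sketch of protocol-specific payloads being normalized into one common record shape before being forwarded for processing and storage. The field names, protocols, and record schema are assumptions for illustration, not the actual Manufacturing Data Engine data model.

```python
# Hypothetical sketch of edge-side normalization: payloads from different
# machine protocols are mapped into one common tag/value/timestamp record.
# Field names and schema are illustrative, not Google Cloud's actual model.

from datetime import datetime, timezone

def normalize(raw: dict, protocol: str) -> dict:
    """Map a protocol-specific payload to a common record shape."""
    if protocol == "opcua":
        return {"tag": raw["nodeId"], "value": raw["val"],
                "ts": raw["sourceTimestamp"]}
    if protocol == "modbus":
        # Modbus registers carry no timestamp, so we stamp on arrival
        return {"tag": f"register/{raw['register']}", "value": raw["value"],
                "ts": datetime.now(timezone.utc).isoformat()}
    raise ValueError(f"unsupported protocol: {protocol}")

record = normalize({"nodeId": "ns=2;s=Spindle.RPM", "val": 1450,
                    "sourceTimestamp": "2022-05-01T12:00:00Z"}, "opcua")
print(record["tag"], record["value"])
```

Once every machine, regardless of protocol, emits the same record shape, the downstream contextualization, storage, and analytics described above only have to be built once.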
Price optimization notebook for apparel retail using Google Vertex AI
One of the key requirements of a price optimization system is an accurate forecasting model to quickly simulate demand response to price changes. Historically, developing a Machine Learning forecast model required a long timeline with heavy involvement from skilled specialists in data engineering, data science, and MLOps. The teams needed to perform a variety of tasks in feature engineering, model architecture selection, hyperparameter optimization, and then manage and monitor deployed models.
Vertex AI Forecast provides an advanced AutoML workflow for time series forecasting that helps dramatically reduce the engineering and research effort required to develop accurate forecasting models. The service easily scales up to large datasets with over 100 million rows and 1,000 columns, covering years of data for thousands of products with hundreds of possible demand drivers. Most importantly, it produces highly accurate forecasts. The model scored in the top 2.5% of submissions in M5, the most recent global forecasting competition, which used data from Walmart.
Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance
Last year Google Research announced our vision for Pathways, a single model that could generalize across domains and tasks while being highly efficient. An important milestone toward realizing this vision was to develop the new Pathways system to orchestrate distributed computation for accelerators. In “PaLM: Scaling Language Modeling with Pathways”, we introduce the Pathways Language Model (PaLM), a 540-billion parameter, dense decoder-only Transformer model trained with the Pathways system, which enabled us to efficiently train a single model across multiple TPU v4 Pods. We evaluated PaLM on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases.
UPS Expands Deal With Google Cloud to Prepare for Surge in Data
Logistics company to gain network, storage and compute capacity to help it analyze new data coming from initiatives such as RFID chips on packages
Robust Routing Using Electrical Flows
We view the road network as a graph, where intersections are nodes and roads are edges. Our method then models the graph as an electrical circuit by replacing the edges with resistors, whose resistances equal the road traversal times, and then connecting a battery to the origin and destination, which results in electrical current between those two points. In this analogy, resistance models how time-consuming a segment is to traverse: long and congested segments have high resistance.
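The circuit analogy can be made concrete on a tiny network. The sketch below, a simplified illustration rather than the paper’s algorithm, builds the graph Laplacian from conductances (1/resistance), injects a unit current at the origin, grounds the destination, and reads off how the flow splits across alternative routes.

```python
# Toy electrical-flow routing on a 3-node network, solved with plain
# Gaussian elimination. Illustrative only; not the paper's full method.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Intersections: origin 0, midpoint 1, destination 2.
# Direct road 0-2 takes 1 time unit; the detour 0-1 and 1-2 take 1 each.
edges = [(0, 2, 1.0), (0, 1, 1.0), (1, 2, 1.0)]  # (u, v, resistance)

n = 3
L = [[0.0] * n for _ in range(n)]
for u, v, r in edges:
    g = 1.0 / r  # conductance
    L[u][u] += g; L[v][v] += g
    L[u][v] -= g; L[v][u] -= g

# Ground the destination (node 2) and inject 1 unit of current at node 0.
reduced = [row[:2] for row in L[:2]]
v = solve(reduced, [1.0, 0.0]) + [0.0]  # node potentials

direct = (v[0] - v[2]) / 1.0   # current on the direct road
detour = (v[0] - v[1]) / 1.0   # current entering the two-hop detour
print(direct, detour)  # roughly 2/3 of the flow takes the direct road
```

Unlike a shortest-path query, which would put all traffic on the direct road, the electrical flow naturally spreads demand across routes in proportion to how fast they are, which is what makes it attractive for robust routing.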
Improving PPA In Complex Designs With AI
The goal of chip design always has been to optimize power, performance, and area (PPA), but results can vary greatly even with the best tools and highly experienced engineering teams. AI works best in design when the problem is clearly defined in a way that AI can understand. So an IC designer must first see whether there is a problem that can be tied to a system’s ability to adapt, learn, and generalize knowledge or rules, and then apply them to an unfamiliar scenario.
Can Robots Follow Instructions for New Tasks?
The results of this research show that simple imitation learning approaches can be scaled in a way that enables zero-shot generalization to new tasks. That is, it shows one of the first indications of robots being able to successfully carry out behaviors that were not in the training data. Interestingly, language embeddings pre-trained on ungrounded language corpora make for excellent task conditioners. We demonstrated that natural language models can not only provide a flexible input interface to robots, but that pretrained language representations actually confer new generalization capabilities to the downstream policy, such as composing unseen object pairs together.
In the course of building this system, we confirmed that periodic human interventions are a simple but important technique for achieving good performance. While there is a substantial amount of work to be done in the future, we believe that the zero-shot generalization capabilities of BC-Z are an important advancement towards increasing the generality of robotic learning systems and allowing people to command robots. We have released the teleoperated demonstrations used to train the policy in this paper, which we hope will provide researchers with a valuable resource for future multi-task robotic learning research.
Inside X’s Mission to Make Robots Boring
It’s research by Everyday Robots, a project of X, Alphabet’s self-styled “moonshot factory.” The cafe testing ground is one of dozens on the Google campus in Mountain View, California, where a small percentage of the company’s massive workforce has now returned to work. The project hopes to make robots useful, operating in the wild instead of controlled environments like factories. After years of development, Everyday Robots is finally sending its robots into the world—or at least out of the X headquarters building—to do actual work.
Chip floorplanning with deep reinforcement learning
AWS, Google, Microsoft apply expertise in data, software to manufacturing
As manufacturing becomes digitized, Google’s methodologies that were developed for the consumer market are becoming relevant for industry, said Wee, who previously worked in the semiconductor industry as an industrial engineer. “We believe we’re at a point in time where these technologies—primarily the analytics and AI area—that have been very difficult to use for the typical industrial engineer are becoming so easy to use on the shop floor,” he said. “That’s where we believe our competitive differentiation lies.”
Meanwhile, Ford is also selectively favoring human brain power over software to analyze data, and turning more and more to in-house coders rather than applications vendors. “The solution will be dependent upon the application,” Mikula said. “Sometimes it will be software, and sometimes it’ll be a data analyst who crunches the data sources. We would like to move to solutions that are more autonomous and driven by machine learning and artificial intelligence. The goal is to be less reliant on purchased SaaS.”
Altana AI Raises $15M Series A Investment to Build the Single Source of Truth On the Global Supply Chain
Altana AI has secured $15 million in Series A funding, led by GV (formerly Google Ventures). Floating Point, Ridgeline Partners, and existing investors Amadeus Capital Partners and Schematic Ventures joined the round, which closed in May 2021.
The company’s AI platform — the Altana Atlas — connects and learns from billions of data points to create a living, intelligent map of global commerce. Multinational enterprises like Boston Scientific are connecting to the Altana Atlas to map their supply chains beyond their immediate suppliers, build more resilient supplier networks, and manage risk across their global footprint. Government agencies and global logistics providers in the US and abroad are using the Altana Atlas to surface illicit activity and security threats hiding in opaque supply chain networks. To enable compliant trade at the speed of e-commerce, the world’s largest logistics providers and customs agencies are using the Altana Atlas to expedite lawful shipments across borders while filtering out illicit shipments.
Altana is pioneering a unique federated machine learning approach that enables shared global intelligence without data sharing, unlocking information that was never before available to power artificial intelligence. Karim Faris, General Partner at GV said, “Altana has cracked the code on creating intelligence from data that cannot be brought together directly because of privacy, sovereignty, and intellectual property concerns. In just two-and-a-half years since its founding, Altana is already working with a number of the world’s most important government agencies, logistics providers, and enterprises to transform how they manage global supply chains.”
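Federated learning of the kind described here follows a general pattern: each party computes a model update on its own private data, and only the parameters, never the raw records, are pooled. The sketch below is a generic federated-averaging toy on a one-parameter regression, not Altana’s actual algorithm, which the excerpt does not disclose.

```python
# Generic federated-averaging (FedAvg-style) toy: two parties fit a
# shared slope y ≈ w*x without ever exchanging their private data.
# This is an illustration of the pattern, not Altana's system.

def local_update(w, private_data, lr=0.1):
    """One gradient-descent step on squared error, computed locally."""
    grad = sum(2 * (w * x - y) * x for x, y in private_data) / len(private_data)
    return w - lr * grad

def federated_average(updates, sizes):
    """Average the parties' parameters, weighted by dataset size."""
    total = sum(sizes)
    return sum(u * s for u, s in zip(updates, sizes)) / total

# Private datasets drawn from the same underlying trend y ≈ 2x
party_a = [(1.0, 2.1), (2.0, 3.9)]
party_b = [(3.0, 6.2)]

w = 0.0
for _ in range(200):
    updates = [local_update(w, party_a), local_update(w, party_b)]
    w = federated_average(updates, sizes=[len(party_a), len(party_b)])
print(round(w, 2))  # converges near the shared slope of about 2
```

The privacy property is structural: the only values that ever leave a party are model parameters, which is what lets organizations with sovereignty or IP constraints contribute to a shared model.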
Introducing Intrinsic
Intrinsic is working to unlock the creative and economic potential of industrial robotics for millions more businesses, entrepreneurs, and developers. We’re developing software tools designed to make industrial robots (which are used to make everything from solar panels to cars) easier to use, less costly and more flexible, so that more people can use them to make new products, businesses and services.
Visual Inspection AI: a purpose-built solution for faster, more accurate quality control
The Google Cloud Visual Inspection AI solution automates visual inspection tasks using a set of AI and computer vision technologies that enable manufacturers to transform quality control processes by automatically detecting product defects.
We built Visual Inspection AI to meet the needs of quality, test, manufacturing, and process engineers who are experts in their domain, but not in AI. By combining ease of use with a focus on priority use cases, customers are realizing significant benefits compared to general-purpose machine learning (ML) approaches.
Toward Generalized Sim-to-Real Transfer for Robot Learning
A limitation for their use in sim-to-real transfer, however, is that because GANs translate images at the pixel-level, multi-pixel features or structures that are necessary for robot task learning may be arbitrarily modified or even removed.
To address the above limitation, and in collaboration with the Everyday Robot Project at X, we introduce two works, RL-CycleGAN and RetinaGAN, that train GANs with robot-specific consistencies — so that they do not arbitrarily modify visual features that are specifically necessary for robot task learning — and thus bridge the visual discrepancy between sim and real.
Learning to Manipulate Deformable Objects
In “Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks,” to appear at ICRA 2021, we release an open-source simulated benchmark, called DeformableRavens, with the goal of accelerating research into deformable object manipulation. DeformableRavens features 12 tasks that involve manipulating cables, fabrics, and bags and includes a set of model architectures for manipulating deformable objects towards desired goal configurations, specified with images. These architectures enable a robot to rearrange cables to match a target shape, to smooth a fabric to a target zone, and to insert an item in a bag. To our knowledge, this is the first simulator that includes a task in which a robot must use a bag to contain other items, which presents key challenges in enabling a robot to learn more complex relative spatial relations.
Google Cloud and Seagate: Transforming hard-disk drive maintenance with predictive ML
At Google Cloud, we know first-hand how critical it is to manage HDDs in operations and preemptively identify potential failures. We are responsible for running some of the largest data centers in the world—any misses in identifying these failures at the right time can potentially cause serious outages across our many products and services. In the past, when a disk was flagged for a problem, the main option was to repair it on site using software. But this procedure was expensive and time-consuming. It required draining the data from the drive, isolating the drive, running diagnostics, and then re-introducing it to traffic.
That’s why we teamed up with Seagate, our HDD original equipment manufacturer (OEM) partner for Google’s data centers, to find a way to predict frequent HDD problems. Together, we developed a machine learning (ML) system, built on top of Google Cloud, to forecast the probability of a recurring failing disk—a disk that fails or has experienced three or more problems in 30 days.
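At its simplest, the prediction task above amounts to scoring each drive’s recent health signals and flagging likely recurring failers for proactive attention. The sketch below uses a hand-set logistic score over invented SMART-style features; the real system is a trained ML pipeline on Google Cloud, and these features, weights, and threshold are purely illustrative.

```python
# Illustrative sketch of flagging likely "recurring failer" drives with a
# logistic score. Features, weights, and threshold are invented; the real
# system learns these from fleet telemetry.

import math

# Hypothetical per-drive features:
# (reallocated sectors, seek errors, repair events in the last 30 days)
WEIGHTS = (0.03, 0.002, 0.9)
BIAS = -3.0

def failure_probability(features):
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))  # logistic score in [0, 1]

fleet = {
    "disk-a": (4, 120, 0),    # healthy-looking drive
    "disk-b": (60, 900, 3),   # three repair events in 30 days
}
flagged = [d for d, f in fleet.items() if failure_probability(f) > 0.5]
print(flagged)
```

The operational payoff is in the ordering: drives that score high can be drained and swapped on a planned schedule instead of failing in production and triggering the expensive repair procedure described above.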
Multi-Task Robotic Reinforcement Learning at Scale
For general-purpose robots to be most useful, they would need to be able to perform a range of tasks, such as cleaning, maintenance and delivery. But training even a single task (e.g., grasping) using offline reinforcement learning (RL), a trial-and-error learning method where the agent trains on previously collected data, can take thousands of robot-hours, in addition to the significant engineering needed to enable autonomous operation of a large-scale robotic system. Thus, the computational costs of building general-purpose everyday robots using current robot learning methods become prohibitive as the number of tasks grows.
Way beyond AlphaZero: Berkeley and Google work shows robotics may be the deepest machine learning of all
With no well-specified rewards and state transitions that take place in a myriad of ways, training a robot via reinforcement learning represents perhaps the most complex arena for machine learning.
Rearranging the Visual World
Transporter Nets use a novel approach to 3D spatial understanding that avoids reliance on object-centric representations, making them general for vision-based manipulation yet far more sample-efficient than benchmarked end-to-end alternatives. As a consequence, they are fast and practical to train on real robots. We are also releasing an accompanying open-source implementation of Transporter Nets together with Ravens, our new simulated benchmark suite of ten vision-based manipulation tasks.
Edge-Inference Architectures Proliferate
What makes one AI system better than another depends on a lot of different factors, including some that aren’t entirely clear.
The new offerings exhibit a wide range of structure, technology, and optimization goals. All must be gentle on power, but some target wired devices while others target battery-powered devices, giving different power/performance targets. While no single architecture is expected to solve every problem, the industry is in a phase of proliferation, not consolidation. It will be a while before the dust settles on the preferred architectures.
RightHand Robotics raises $23 million from Menlo Ventures, Google
With its reinforced bank account, Somerville, Mass.-based RightHand plans to expand its business and technical teams and broaden its suite of product applications, the firm said. “The funds will be used to support our growth and in hiring people as fast as we effectively can,” Martinelli said. “We’re getting follow-on orders and we need to support those orders and extend the product line, both for projects in the U.S. and in Europe and Japan.”
Google Glass Didn't Disappear. You Can Find It On The Factory Floor
With Google Glass, she scans the serial number on the part she’s working on. This brings up manuals, photos or videos she may need. She can tap the side of the headset or say “OK Glass” and use voice commands to leave notes for the next shift worker.
Peggy Gullick, business process improvement director with AGCO, says the addition of Google Glass has been “a total game changer.” Quality checks are now 20 percent faster, she says, and it’s also helpful for on-the-job training of new employees. Before this, workers used tablets.
Augmented Reality Is Already Improving Worker Performance
The video below, for example, shows a side-by-side time-lapse comparison of a GE technician wiring a wind turbine’s control box using the company’s current process, and then doing the same task while guided by line-of-sight instructions overlaid on the job by an AR headset. The device improved the worker’s performance by 34% on first use.
There’s been concern about machines replacing human workers, and certainly this is happening for some jobs. But the experience at General Electric and other industrial firms shows that, for many jobs, combinations of humans and machines outperform either working alone. Wearable augmented reality devices are especially powerful, as they deliver the right information at the right moment and in the ideal format, directly in workers’ line of sight, while leaving workers’ hands free so they can work without interruption. This dramatically reduces the time needed to complete a job because workers needn’t stop what they’re doing to flip through a paper manual or engage with a device or workstation. It also reduces errors because the AR display provides explicit guidance overlaid on the work being done, delivered on demand. Workers need only follow the detailed instructions directly in front of them in order to move through a sequence of steps to completion. If they encounter problems, they can launch training videos or connect by video with remote experts to share what they see through their smart glasses and get real-time assistance.