Amazon Web Services (AWS) has committed to accelerating its generative AI services to drive impact across industries, from healthcare and life sciences to media, entertainment, and education.
Generative AI is a subset of machine learning powered by very large ML models, including large language models (LLMs) and multimodal models that span text, images, video, and audio. Applications like ChatGPT and Stable Diffusion have captured widespread attention and imagination, and those expectations are well founded.
AWS believes that AI and ML are among the most transformative technologies of our time, capable of solving some of humanity's toughest challenges. That is why, over the past 20 years, Amazon has invested heavily in AI and ML development and in applying these technologies across all of its business areas.
To push this technology forward, the company also launched Amazon Bedrock, a new service for building and scaling generative AI applications: applications that can generate text, images, and audio, and synthesize data on demand. Amazon Bedrock makes it easy for customers to access foundation models (FMs), very large ML models from leading AI startups including AI21 Labs, Anthropic, and Stability AI, as well as AWS's own Titan foundation models.
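Bedrock exposes these foundation models through a standard AWS API. As a rough sketch of what that looks like in practice, the snippet below builds a request for a Titan text model and invokes it with the boto3 SDK. The model ID, parameter names, and JSON body shape shown here are assumptions for illustration, not guaranteed to match the exact API contract; the real call also requires valid AWS credentials and access to the model.

```python
import json


def build_titan_request(prompt, max_tokens=256, temperature=0.5):
    """Build a JSON request body for a Titan text-generation model.

    The field names below (inputText, textGenerationConfig) follow the
    general shape of Bedrock text requests and are assumptions here.
    """
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })


def invoke_titan(prompt):
    """Call a Titan model through the Bedrock runtime API.

    Requires AWS credentials and Bedrock model access; not run here.
    """
    import boto3  # imported lazily so the payload builder works standalone

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",  # assumed model ID
        contentType="application/json",
        accept="application/json",
        body=build_titan_request(prompt),
    )
    return json.loads(response["body"].read())
```

In this design, the application code stays the same regardless of which FM is selected; switching providers is mostly a matter of changing the model ID and request body.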
Very large ML models require massive computing power to operate. According to AWS, its Inferentia chips deliver the highest energy efficiency and the lowest cost for running complex generative AI inference workloads (such as model serving and query responses in production) at scale on AWS.
In addition, the company also introduced Amazon CodeWhisperer, a service that uses generative AI to suggest code in real time based on a developer's comments and their existing code. Individual software developers can use Amazon CodeWhisperer for free with no usage limits, and paid plans add professional features such as administrative capabilities and additional security for businesses.
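To illustrate the workflow described above: a developer writes a natural-language comment, and CodeWhisperer proposes an implementation inline. The function body below is an example of the kind of suggestion such a tool might generate for the comment shown; it is not actual CodeWhisperer output.

```python
from collections import Counter


# Developer's prompt comment:
# function that returns the n most frequent words in a text
def most_frequent_words(text, n):
    # A plausible completion a code assistant might suggest:
    # lowercase the text, split on whitespace, count occurrences,
    # and return the n most common words in descending order.
    words = text.lower().split()
    return [word for word, _ in Counter(words).most_common(n)]
```

The developer remains in control: suggestions are accepted, edited, or discarded like any other autocomplete result.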