“Robin deals with a world where things are changing all around it”

An advanced perception system, which detects and learns from its own mistakes, enables Robin robots to select individual objects from jumbled packages — at production scale.

Inside an Amazon fulfillment center, as packages roll down a conveyor, the Robin robotic arm goes to work. It dips, picks up a package, scans it, and places it on a small drive robot that routes it to the correct loading dock. By the time the drive has dropped off its package, Robin has loaded several more drive robots.

While Robin looks a lot like other robotic arms used in industry, its vision system enables it to see and react to the world in an entirely different way.

“Most robotic arms work in a controlled environment,” explained Charles Swan, a senior manager of software development at Amazon Robotics & AI. “If they weld vehicle frames, for example, they expect the parts to be in a fixed location and follow a pre-scripted set of motions. They do not really perceive their environment.


“Robin deals with a world where things are changing all around it. It understands what objects are there — different sized boxes, soft packages, envelopes on top of other envelopes — and decides which one it wants and grabs it. It does all these things without a human scripting each move that it makes. What Robin does is not unusual in research. But it is unusual in production.”

Yet, thanks to machine learning, Robin and its advanced perception system are moving rapidly into production. When Swan began working with the robot in 2021, Amazon was operating only a couple dozen units at its fulfillment centers. Today, Swan’s team is significantly scaling that perception system.

To reach that goal, Amazon Robotics researchers are exploring ways for Robin to achieve unparalleled levels of production accuracy. Because Amazon is so focused on improving the customer experience through timely deliveries, even 99.9% accuracy doesn’t meet the mark for robotics researchers.

Training day

Over the past five years, machine learning has significantly advanced the ability of robots to see, understand, and reason about their environment.

Model 1 from October 2021 — The model misses two black packages and one occluded package.

In the past, classical computer vision algorithms systematically segmented scenes into individual elements, a slow and computationally intensive approach. Supervised machine learning has made that process more efficient.

Model 2 from November 2021 — The black packages are detected, but a heavily occluded one is still missed.

“We don’t explicitly say how the model should learn,” said Bhavana Chandrashekhar, a software development manager at Amazon Robotics & AI. “Instead, we give it an input image and say, ‘This is an object.’ Then it tries to identify the object in the image, and we grade how well it does that. Using only that supervised feedback, the model learns how to extract features from the images so it can classify the objects in them.”
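The supervised feedback loop Chandrashekhar describes can be sketched in a few lines. The sketch below is purely illustrative; the intersection-over-union matching, the grading threshold, and the toy pixel sets are assumptions, not Amazon's actual training pipeline.

```python
# Illustrative sketch of supervised feedback: the model proposes object regions,
# and a grade against human-labeled regions drives learning. All names and
# numbers here are hypothetical.

def iou(pred: set, truth: set) -> float:
    """Intersection-over-union of two sets of pixel coordinates."""
    if not pred and not truth:
        return 1.0
    return len(pred & truth) / len(pred | truth)

def grade(predictions, ground_truth, threshold=0.5):
    """Fraction of labeled objects the model recovered well enough."""
    matched = 0
    for pred in predictions:
        if any(iou(pred, gt) >= threshold for gt in ground_truth):
            matched += 1
    return matched / max(len(ground_truth), 1)

# A toy scene: two labeled packages; the model finds one well, misses the other.
gt = [{(0, 0), (0, 1), (1, 0), (1, 1)}, {(5, 5), (5, 6)}]
preds = [{(0, 0), (0, 1), (1, 0)}, {(9, 9)}]
print(grade(preds, gt))  # 0.5: one of two labeled packages recovered
```

In a real system the grade would be a differentiable loss driving gradient updates; the point here is only the shape of the feedback signal.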

Model 3 from February 2022 — All packages are correctly detected.

Robin’s perception system started with pre-trained models that could already identify object elements like edges and planes.

Next, it was taught to identify the type of packages found within the fulfillment center’s sortation area.

Machine learning models learn best when provided with an abundance of sample images. Yet, despite shipping millions of packages daily, Chandrashekhar’s team initially found it hard to find enough training data to capture the enormous variation of the boxes and packages continuously rolling down a conveyor.

“Everything comes in a jumble of sizes and shapes, some on top of the other, some in the shadows,” Chandrashekhar said. “During the holidays, you might see pictures of Minions or Billie Eilish mixed in with our usual brown and white packages. The taping might change.

“Sometimes, the differences between one package and another are hard to see, even for humans. You might have a white envelope on another white envelope, and both are crinkled so you can’t tell where one begins and the other ends,” she explained.

To teach Robin’s model to make sense of what it sees, researchers gathered thousands of images, drew lines around features like boxes, shipping labels, and yellow, brown, and white mailers, and added descriptions. The team then used these annotated images to continually retrain the robot.

The training continued in a simulated production environment, with the robot working on a live conveyor with test packages.

Whenever Robin failed to identify an object or make a pick, the researchers would annotate the errors and add them to the training deck. This ongoing training regimen significantly improved the robot’s efficiency.

Continual learning

Robin’s success rate during these tests improved markedly, but the researchers pushed for near perfection. “We want to be really good at these random edge problems, which happen only a few times during testing, but occur more often in the field when we’re running at larger scale,” Chandrashekhar said.

Because of Robin’s high accuracy rate in testing, researchers found it difficult to find enough of those mistakes to create a dataset for further training. “In the beginning, we had to imagine how the robot would make a mistake in order to create the type of data we could use to improve the model,” Chandrashekhar explained.

The Amazon team also monitored Robin’s confidence in its decisions. The perception model might, for example, indicate it was confident about spotting a package, but less confident about assigning it to a specific type of package. Chandrashekhar’s team developed a framework to ensure those low-confidence images were automatically sent for annotation by a human and then added back to the training deck.
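The routing logic described above can be sketched simply: any detection whose weakest confidence score falls below a threshold is queued for human annotation. The data fields, threshold, and class names below are hypothetical, not the team's actual framework.

```python
# Hypothetical sketch of low-confidence routing: detections the model is unsure
# about are queued for human annotation and later added back to the training set.

from dataclasses import dataclass, field

@dataclass
class Detection:
    image_id: str
    detect_conf: float   # confidence that a package is present
    class_conf: float    # confidence in the assigned package type

@dataclass
class AnnotationQueue:
    pending: list = field(default_factory=list)

    def route(self, det: Detection, threshold: float = 0.8) -> bool:
        """Queue any detection whose weakest confidence falls below threshold."""
        if min(det.detect_conf, det.class_conf) < threshold:
            self.pending.append(det.image_id)
            return True
        return False

queue = AnnotationQueue()
queue.route(Detection("img-001", detect_conf=0.97, class_conf=0.55))  # queued
queue.route(Detection("img-002", detect_conf=0.99, class_conf=0.93))  # confident
print(queue.pending)  # ['img-001']
```

Taking the minimum of the two scores matches the example in the text: a model can be sure a package exists yet unsure of its type, and either weakness is worth a human look.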


“This is part of continual learning,” says Jeremy Wyatt, senior manager of applied science. “It’s incredibly powerful because every package becomes a learning opportunity. Every robot contributes experiences that help the entire fleet get better.”

That continual learning led to big improvements. “In just six months, we halved the number of packages Robin’s perception system can’t pick and we reduced the errors the perception system makes by a factor of 10,” Wyatt notes.

Still, robots will make mistakes in production that have to be corrected. What happens in the moment if Robin drops a package or puts two mailers on one sortation robot? While most production robots are oblivious to mistakes, Robin is an exception. It monitors its performance for missteps.

Robin’s quality assurance system oversees how it handles packages. If it identifies a problem, it will try to fix it on its own, or call for human intervention if it cannot. “If Robin finds and corrects a mistake, it might lose some time,” Swan explained. “However, if that error wasn’t addressed at all, we might lose a day or two getting that product to the customer.”
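The fix-or-escalate behavior Swan describes reduces to a simple decision: attempt an autonomous correction when the error is one the robot can handle, and call for help otherwise. The error categories below are invented for illustration; the source doesn't enumerate which mistakes Robin can correct on its own.

```python
# A sketch of monitor-fix-or-escalate. The set of self-correctable error
# categories is an assumption made for illustration.

def resolve(error: str) -> str:
    """Try to fix a detected handling error; escalate if Robin can't."""
    self_correctable = {"double_pick", "dropped_package"}  # assumed categories
    if error in self_correctable:
        return "corrected by robot"
    return "escalated to human"

print(resolve("double_pick"))      # corrected by robot
print(resolve("jammed_conveyor"))  # escalated to human
```

The tradeoff Swan names is visible even in this toy: self-correction costs the robot seconds, while an unhandled error could cost the customer a day or two.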

Scaling Robin perception

Swan joined the Robin perception team when there were only a few dozen units in production. His goal: scale the perception system to thousands of robotic arms. To accomplish this, Swan’s team doesn’t just focus on catching and annotating errors for continual learning, it seeks the root cause of those errors.

They rely on Robin perception’s user interface, which lets engineers look through the robot’s eyes and trace how its vision system made the decision. They might, for example, find a Robin that picked up two packages because it could not distinguish one from the other, or another that failed to grab any package owing to a noisy depth signal. Auditing Robin’s decisions lets Amazon Robotics engineers fine-tune the robot’s behaviors.

This is complemented by the metrics derived from a fleet of machines sorting well over 1 million items every day. “Once you have that kind of data, then you can start to look for correlations,” Swan said. “Then you can say the latency in making a decision is related to this property of the machine or this property of the scene and that’s something we can focus on.”

Fleet metrics provide data about a greater range of scenes and problems than any one machine would ever see, from a broken light to an address label stuck on the conveyor belt. That data, used to retrain Robin every few days, gives it a much broader understanding of the world in which it works.

The Robin robotic arm sorts packages

It also helps Amazon improve efficiency. Before Robin picks up a package, it must first segment a cluttered scene, decide which package it will grab, calculate how it will approach the package, and choose how many of its eight suction cups to use to pick it up. Choose too many and it might lift more than one package; too few, and it could drop its cargo.
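The suction-cup tradeoff above suggests a simple heuristic: engage enough of the eight cups to lift the package's weight, but no more than fit on its surface. The cup footprint, weight capacity, and sizing rule below are assumptions for illustration, not Robin's actual grasp planner.

```python
# An illustrative heuristic for choosing suction cups: fewest cups that can
# lift the mass, capped by what fits on the package surface. Robin has eight
# cups; every other number here is invented.

import math

CUP_AREA_CM2 = 20.0  # assumed footprint of one suction cup
MAX_CUPS = 8

def cups_to_use(package_area_cm2: float, package_mass_kg: float,
                kg_per_cup: float = 1.5) -> int:
    """Fewest cups that can lift the mass, capped by available surface area."""
    needed = max(1, math.ceil(package_mass_kg / kg_per_cup))
    cap = max(1, min(MAX_CUPS, int(package_area_cm2 // CUP_AREA_CM2)))
    return min(needed, cap)

print(cups_to_use(package_area_cm2=60, package_mass_kg=2))   # 2 cups
print(cups_to_use(package_area_cm2=400, package_mass_kg=1))  # 1 cup
```

Using the minimum of need and fit mirrors the failure modes in the text: too many cups risks lifting two packages, too few risks dropping one.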

That decision requires much more than computer vision. “Making decisions on what and where to grasp is accomplished with a combination of learning systems, optimization, geometric reasoning, and 3D understanding,” explained Nick Hudson, principal applied scientist with Amazon Robotics & AI. “There are a lot of components which interact, and they all need to accommodate the variations seen across different sites and regions.”

“There is always a tradeoff between efficiency and good decisions,” Swan continued. “That was a major scaling challenge. We did a lot of experimentation offline with very cluttered scenes and other situations that slowed the robots down to improve our algorithms. When we liked them, we would run them on a small portion of the fleet. If they did well, we would roll them out to all the robots.”
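The staged rollout Swan describes — canary a candidate on a small slice of the fleet, then promote it only if it beats the baseline — can be sketched as below. The fleet structure, canary fraction, and scoring function are all hypothetical.

```python
# Minimal sketch of a canary rollout: evaluate a candidate algorithm on a
# small portion of the fleet and promote it fleet-wide only if it does well.
# All identifiers and numbers here are invented for illustration.

def staged_rollout(fleet_ids, baseline_score, evaluate, canary_fraction=0.05):
    """Return the ids that end up running the candidate algorithm."""
    n_canary = max(1, int(len(fleet_ids) * canary_fraction))
    canary = fleet_ids[:n_canary]
    if evaluate(canary) > baseline_score:
        return list(fleet_ids)  # promote to the whole fleet
    return []                   # roll back; the fleet stays on the baseline

# Toy run: pretend the candidate scores 0.97 on the canary robots.
fleet = [f"robin-{i:03d}" for i in range(100)]
promoted = staged_rollout(fleet, baseline_score=0.95, evaluate=lambda ids: 0.97)
print(len(promoted))  # 100
```

In practice the evaluation would aggregate fleet metrics over days of real picks rather than a single score, but the promote-or-roll-back shape is the same.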


Those rollouts were also made possible because the software was rewritten to support regular updates, said Sicong Zhao, a software development manager. “The software is modular. That way, we can upgrade one component without affecting the others. It also enables multiple groups to work on different improvements at the same time.” That modularity has enabled key parts of the perception system to be automatically retrained twice a week.

Nor was that a simple task. Robin had many tens of thousands of lines of code, so it took Zhao’s team months to understand how those lines interacted with one another well enough to modularize their components. The effort was worth it. It made Robin easier to upgrade and will ultimately enable automatic fleet updates as frequently as needed while mitigating operational disruptions.

Next-generation robot perception

Those continuous improvements are essential to deploy Robin at Amazon’s scale, Swan explained. The team’s goal is to update the fleet of Robin robots automatically several times weekly.

“We are increasing our usage of Robin,” Swan said. “To do that, we must continue to improve Robin’s ability to handle those random edge cases, so it never mis-sorts, has great motion planning, and moves at the fastest safe speed its arm can handle — all with time to spare.”

That means even more innovation. Take, for example, package recognition. Robin’s perception system needs to be able to spot a pile of packages and know to start with the top one to avoid upending the pile. “Robin has a sense of how to do that as well, but we need machine learning to accelerate the way Robin decides which one it is most likely to pick up successfully as we keep adding new types of packaging,” Zhao explained.


Chandrashekhar believes more powerful digital simulations, based on the physics of robot and package movement, will enable faster innovation. “This is very difficult when we’re talking about deformable packages, like a water bottle in a soft mailer,” she said. “But we’re getting a lot closer.”

Longer-term, she wants to see self-learning robots that teach themselves to make fewer mistakes and to recover from them faster. Self-learning will also make the robots easier to use. “Deploying a robot shouldn’t require a PhD,” Swan said.


“There is a unique opportunity to have this fleet adapt automatically,” agreed Hudson. “There are open questions on how to accomplish this, including whether individual robots should adapt on their own. The fleet already updates its object understanding using data collected worldwide. How can we also have the individual robots adapt to issues they are seeing locally – for instance if one of the suction cups is blocked or torn?”

Ultimately, though, Swan would like to use what Amazon Robotics researchers have learned to create new types of robots. “We’ve only scratched the surface of what’s possible with robots,” he said.

Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve.