Andreessen Horowitz shared their enterprise adoption report for GenAI last week; here are some key trends from the report, plus what I've observed as a GenAI consultant.
- Growth in Generative AI (GenAI) adoption: consumer spend on generative AI quickly exceeded $1 billion in 2023, and enterprise revenue from GenAI is expected to surpass the consumer market in 2024.
- Initial enterprise engagement: mostly limited to a few obvious use cases and "GPT-wrapper" products, with skepticism about whether GenAI could scale profitably in enterprises.
- Increasing resource allocation: GenAI budgets grew significantly within six months, nearly tripling in some cases; workloads are expanding across a variety of use cases and transitioning into production; GenAI is treated as a strategic initiative, with foundational models being built and deployed.
- Budget allocation and return on investment (ROI): surveyed companies spent an average of $7M on GenAI in 2023, with spending projected to increase 2x to 5x in 2024; budgets are shifting from one-time innovation funds to recurring software line items; ROI is measured through productivity, customer satisfaction, revenue generation, savings, efficiency, and accuracy.
- Talent and implementation needs: strong demand for highly specialized technical talent to scale GenAI solutions, and for professional services from model providers for custom development.
- Multi-model and open source: enterprises are adopting multiple models to avoid vendor lock-in and stay ahead; a notable shift from the dominance of closed-source models toward open source, driven by control, customization, and security concerns.
- Customization and cloud influence: enterprises prefer fine-tuning over building models from scratch; cloud service providers influence purchasing decisions, with preferences divided along CSP loyalty.
- Early features and model performance: early-to-market features and model performance are key adoption factors, and model performance is perceived to be converging, especially after fine-tuning.
- Designing for flexibility: applications are being designed for easy model interchangeability, avoiding dependency on any single provider.
- Building in-house versus buying: enterprises currently focus on building applications in-house, incorporating APIs from foundational models; a shift toward buying is expected once enterprise-focused AI apps enter the market.
- Internal versus external use cases: greater enthusiasm for internal use cases, with a cautious approach to consumer-facing deployments due to public-perception and safety risks.
- Market opportunity and future growth: the model API and fine-tuning market is projected to reach a $5B run rate by end of 2024; larger GenAI deal sizes and faster closure times indicate rapid market growth, with wider opportunities beyond foundational models in tooling, model serving, and application building.
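The "designing for flexibility" point above can be made concrete with a small sketch: a provider-agnostic interface that lets an application swap models through configuration rather than code changes. The class and model names below are illustrative assumptions, not any specific vendor's API.

```python
# Sketch of model interchangeability: all names here are hypothetical.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Minimal interface every backing model must satisfy."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class ClosedVendorModel(ChatModel):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor's SDK here.
        return f"[closed] {prompt}"


class OpenSourceModel(ChatModel):
    def complete(self, prompt: str) -> str:
        # A real implementation would call a self-hosted model here.
        return f"[open] {prompt}"


def build_model(name: str) -> ChatModel:
    """Swap providers via configuration instead of code changes."""
    registry = {"closed": ClosedVendorModel, "open": OpenSourceModel}
    return registry[name]()


# Usage: the application only ever sees the ChatModel interface.
model = build_model("open")
answer = model.complete("Summarize Q3 revenue.")
```

Because callers depend only on the abstract interface, replacing one model with another (closed or open source) is a one-line configuration change, which is exactly the lock-in avoidance the surveyed enterprises describe.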
Generative AI Model Updates and Trends
-
Happy New Year! If you are an Enterprise CTO, you are probably thinking about your GenAI strategy. Here's a decent write-up by Gartner: https://gtnr.it/3RQodsK. To augment that, here are some #genai trends to track and act on in 2024:
-- Open and Smaller Models: Open models like Llama, Mistral, BERT, and FLAN are becoming competitive with larger, closed-source models. They're suitable for many use cases and offer transparency for Responsible AI. In my opinion, open and closed models are not in a zero-sum game; BOTH should be used for the right use case. Action: Implement a clear plan for using different models. Amazon Web Services (AWS) users can leverage Bedrock & SageMaker (https://bit.ly/3vkX4qa).
-- Domain-Adapted Models: Use your enterprise's proprietary data to extend a large language model via continued pre-training (CPT) for domain-specific tasks. Action: Assess your use cases and data for CPT alongside fine-tuning. Learn more: https://lnkd.in/eNjbQm-m
-- Multi-modal Models (MMMs): MMMs will gain prominence in 2024. Both commercial models (like GPT-4V) and open-source models (like Llava) will be popular. Action: Expand into business cases served by MMMs. More about Llava: https://lnkd.in/eTvn82iM
-- AI Agents (RAG+++): AI agents built on LLMs can improve upon RAG by intelligently utilizing multiple data sources. Action: Prepare APIs and data sources for AI agents. More information: https://lnkd.in/eVf_J_bA
-- LLMs with Graphs: Graphs are one of the best representations of real-world knowledge; combined with LLMs, they can be very effective across domains. Action: Identify suitable business cases and explore Graphs+LLMs. Details: https://lnkd.in/eF68FbVA
-- AI Routers: Most enterprises will end up using a dozen or more models, making it necessary to manage them centrally: authentication, audit, and smart model selection. Action: Build an AI router. AWS Bedrock can assist, but more is needed. Info: https://go.aws/4aFLkyA
-- FinOps meets MLOps: Focus on cost optimization for GenAI projects. 2023 was all about GenAI POCs; 2024 will be about production & big bills! Action: Learn about GenAI business cases and FinOps for GenAI: https://lnkd.in/ezfV8NTa
-- Make AI Invisible: Technology is at its best when it's invisible and seamless to the end user. Action: Revisit existing enterprise applications and rethink the user experience with GenAI while keeping the tech invisible. (https://bit.ly/3vkXvAO)
What are you tracking? Watch for more on domain-focused AI trends, in areas like AI at the edge, robotics, and drug discovery, in upcoming posts.
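The "AI Router" trend above can be sketched in a few lines: a single entry point that handles authentication, audit logging, and a model-selection rule. The routing rule, team names, and model backends here are illustrative assumptions, not a production design or any Bedrock feature.

```python
# Minimal sketch of an AI router: auth, audit, and model selection.
# All names and the length-based routing rule are hypothetical.
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)

# Stand-in model backends keyed by name; real ones would call model APIs.
MODELS: dict[str, Callable[[str], str]] = {
    "small-open-model": lambda p: f"[small] {p}",
    "large-closed-model": lambda p: f"[large] {p}",
}

# Auth: only these (hypothetical) teams may call the router.
ALLOWED_TEAMS = {"data-science", "platform"}


def route(team: str, prompt: str) -> str:
    # Auth: reject callers outside the allow-list.
    if team not in ALLOWED_TEAMS:
        raise PermissionError(f"team {team!r} is not authorized")
    # Smart selection: a toy rule -- short prompts go to the cheaper model.
    name = "small-open-model" if len(prompt) < 80 else "large-closed-model"
    # Audit: log every call with the caller and the chosen model.
    logging.info("team=%s model=%s prompt_chars=%d", team, name, len(prompt))
    return MODELS[name](prompt)
```

In practice the selection rule would weigh cost, latency, and task type rather than prompt length, but the shape stays the same: one governed choke point in front of a dozen models.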
-
Open Source Generative AI: Unpacking Last Week's Groundbreaking Releases
Last week, the #AI community witnessed an unprecedented surge in open source innovation with the release of four major foundation models: DBRX by Databricks, Grok 1.5 by x.ai, Samba-CoE 0.2 by SambaNova Systems, and Jamba by AI21 Labs. This marks a significant shift in the landscape, challenging our perceptions of open source in the realm of generative AI. Here's a quick breakdown of what makes each model stand out:
- DBRX introduces a mixture-of-experts architecture, dynamically selecting the most relevant sub-models for each input. This could revolutionize adaptive responses in AI.
- Grok 1.5 expands its context window to 128k tokens, offering stronger long-context reasoning. Its potential for complex problem-solving is immense.
- Samba-CoE 0.2 outperforms its predecessors at a staggering 330 tokens per second, redefining efficiency in AI inference.
- Jamba merges transformers with structured state space models (SSMs), significantly extending context-length capabilities.
These developments represent more than technological breakthroughs; they signify a reimagining of the open source philosophy in the age of AI. By sharing model weights while keeping datasets and training processes under wraps, companies are navigating the fine line between collaboration and competitive edge.
Implications: The evolution of open source generative AI is not just a narrative of technological advancement; it's a call to action for strategic leadership. As industry leaders, we must consider how these models can be integrated into our operations and offerings, driving innovation forward while maintaining ethical standards.
#AI #GenerativeAI #OpenSource #Innovation #Leadership #llms
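The mixture-of-experts idea behind DBRX can be illustrated with a toy forward pass: a gate scores all experts per input, only the top-k are run, and their outputs are combined with renormalized weights. The dimensions, the random weights, and the expert count below are stand-ins for illustration, not the actual DBRX architecture.

```python
# Toy mixture-of-experts routing; numbers and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS, TOP_K, DIM = 4, 2, 8
gate_weights = rng.normal(size=(DIM, NUM_EXPERTS))          # gating network
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]


def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_weights                # one score per expert
    top = np.argsort(scores)[-TOP_K:]        # indices of the top-k experts
    # Softmax over only the selected experts' scores.
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()
    # Only the selected experts run; output is their weighted sum.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))


y = moe_forward(rng.normal(size=DIM))
```

The efficiency win is that each token pays for only k experts' compute while the model's total parameter count spans all of them; that sparsity is what the post's "dynamically selecting sub-models" refers to.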