3 real-world generative AI strategies for executives

Everyone is excited about AI, but few companies have successfully implemented it. While enthusiasm for generative AI (GenAI) has helped accelerate AI adoption across enterprises, the promises of artificial intelligence have yet to translate into measurable impact on most organizations’ bottom lines.
The trouble isn’t the tech — it’s a lack of executive ownership. Fewer than 30% of CEOs actively sponsor their company’s AI agenda,1 leaving most AI initiatives without the strategic alignment and support needed to scale.
The result? Nearly 90% of AI projects remain stuck in pilot mode.1 Often launched from the ground up to address specific business functions, these efforts struggle to break through organizational silos or demonstrate enterprise-wide value. It’s a negative feedback loop — businesses invest in AI, but without top-down support or holistic strategies, they don’t see meaningful operational or financial returns.
So, how can IT leaders break through this “AI paradox” and drive real, measurable outcomes? Let’s examine the GenAI strategies that have worked for us at Elastic. Working across the business, we tackled key challenges in customer support, employee productivity, and IT operations. Here’s how we built three GenAI solutions that scale.
Enhancing customer experiences with Support Assistant
The business challenge: Ever-rising customer expectations
AI hasn’t just changed how businesses work; it has transformed what customers expect. They’re hyperindependent, with 81% of customers attempting to resolve issues themselves before reaching out to a live representative.2 When they do reach out, customers crave fast, efficient problem-solving. However, support engineers often spend significant time sifting through extensive documentation to meet the specific needs of every customer. The result: Time to resolution creeps up, and customer satisfaction trends down.
The solution: Support Assistant
Support Assistant is our generative AI chatbot experience that empowers support engineers to efficiently respond to customer support issues and enables customers to self-serve.
Our Support Assistant relies on a retrieval-augmented generation (RAG) architecture, OpenAI’s GPT-4o large language model (LLM), and vector search capabilities to access a large bank of proprietary knowledge, including:
- Internal documentation
- Customer case histories
- Resolved tickets
- CRM data
- White papers
- Educational materials
- Product defects
- Resolution logs
By using vector search-enabled semantic search, Support Assistant understands the intent behind a query and surfaces relevant results even when terminology varies. It can also synthesize information from multiple sources, summarize case threads, and suggest next steps.
Bottom line: Customers and support engineers have a conversational search tool that surfaces contextual, relevant information on demand, resulting in faster triage, escalation, and resolution.
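The retrieval flow described above can be sketched in a few lines. This is an illustrative example only, not Elastic’s actual Support Assistant code: the index name, field names, and embedding model are assumptions. It runs a semantic kNN search against a knowledge index, then grounds the LLM’s answer in the retrieved passages.

```python
# Minimal RAG sketch: semantic retrieval from Elasticsearch, then a
# grounded GPT-4o answer. Index/field/model names are hypothetical.

def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble a grounded prompt from retrieved knowledge-base passages."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the support question using only the context below. "
        "Cite passages by number.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

def answer(question: str) -> str:
    # Third-party clients are imported here so the helper above stays standalone.
    from elasticsearch import Elasticsearch
    from openai import OpenAI

    es = Elasticsearch("http://localhost:9200")
    resp = es.search(
        index="support-knowledge",                 # hypothetical index
        knn={
            "field": "content_vector",             # hypothetical vector field
            "query_vector_builder": {              # embed the query server-side
                "text_embedding": {
                    "model_id": ".multilingual-e5-small",
                    "model_text": question,
                }
            },
            "k": 5,
            "num_candidates": 50,
        },
        source=["content"],
    )
    passages = [h["_source"]["content"] for h in resp["hits"]["hits"]]
    chat = OpenAI().chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": build_prompt(question, passages)}],
    )
    return chat.choices[0].message.content
```

Because retrieval happens semantically over dense vectors, a query like “cluster won’t start” can match documents that say “node fails to boot,” which is what lets the assistant handle varied terminology.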

Executive support: GVP of customer support
Our customers and support team want to search, find, fix, and move on. To continue meeting their expectations of self-service support and easy access to knowledge, it was a no-brainer to invest in AI. Chris Blaisure, senior director of field technology, AI, and analytics at Elastic, and I started with a couple of brainstorming sessions with our brightest engineers and leaders. With general enthusiasm across the team and several search and GenAI experts, we easily built business requirements, and our technology team went into POC and build mode. It took us nine months from start to the launch of our first version to customers. It’s hard to imagine how we worked without it in the past.
Julie Baxter-Rudd, Global VP of Support, Elastic
Results: ROI within 4 months of launch
Built and tested at Elastic, Support Assistant delivered ROI in just four months by integrating generative AI directly into the workflows of both users and support engineers.
The impact was significant. Elastic saw a 6x increase in hard case deflections, meaning that more customers were able to find answers through Support Assistant and opted not to create a support ticket.
At the same time, the mean time to first response (MTFR) improved by 23%, allowing customers to get help faster. Additionally, there was a 7% reduction in assisted support case volume, further easing the load on the support team and highlighting the efficiency gains driven by generative AI.
6x increase in hard case deflections | 23% improvement in mean time to first response to customers | 7% reduction in assisted support case volume | 4-month payback period
Improving employee productivity and efficiency with ElasticGPT
The business challenge: Too much data, not enough time
Organizations, ours included, continuously produce more data than they can manage, let alone use effectively. Our workforce struggled to quickly find relevant information scattered across fragmented enterprise data sources and systems. Compounding the issue, some company resources remained siloed, resulting in outdated and unreliable information that undermined data accuracy and drove redundant employee requests.
The solution: ElasticGPT
ElasticGPT is an internal generative AI tool that helps our employees quickly find relevant information, boosting productivity. Powered by Elastic's Search AI Platform and deployed in Elastic Cloud, it uses a vector database, Elasticsearch, Elastic Observability, and enterprise connectors. ElasticGPT also uses RAG to provide reliable, up-to-date answers to our employees through different avenues, including the company intranet, a Slackbot, and an ElasticGPT page. Built on a robust data foundation and drawing from proprietary data, ElasticGPT offers users reliability and accuracy.
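For a sense of what the data foundation behind such a tool might look like, here is a hedged sketch of the ingestion side: normalizing records pulled in by enterprise connectors into an index whose mapping embeds content at index time. The index name, schema, and `semantic_text` field are assumptions for illustration, not ElasticGPT’s real configuration.

```python
# Illustrative ingestion sketch for an internal RAG assistant.
# All names are hypothetical.

INTERNAL_DOCS_MAPPING = {
    "title": {"type": "text"},
    "source_system": {"type": "keyword"},   # e.g., wiki, drive, CRM
    "updated_at": {"type": "date"},
    "body": {"type": "semantic_text"},      # embedded automatically at index time
}

def to_doc(record: dict) -> dict:
    """Normalize a connector record into the index schema above."""
    return {
        "title": record["title"],
        "source_system": record.get("system", "unknown"),
        "updated_at": record["modified"],
        "body": record["content"],
    }

def ingest(es, records) -> None:
    # Client helpers imported here so to_doc stays standalone.
    from elasticsearch.helpers import bulk

    if not es.indices.exists(index="internal-docs"):
        es.indices.create(index="internal-docs",
                          mappings={"properties": INTERNAL_DOCS_MAPPING})
    bulk(es, ({"_index": "internal-docs", "_source": to_doc(r)}
              for r in records))
```

Keeping `source_system` and `updated_at` in the schema is what lets answers cite where information came from and prefer fresh content over stale copies.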
To complement the tool, we invested in a centralized infrastructure that offers the flexibility to maintain multiple generative AI experiences in one environment. This avoids future technical debt by scaling reliably to meet changing needs and promotes the use of generative AI across the company. As we scale, we continue to integrate more enterprise systems, making more data and content available to our users.
To address the use of shadow AI, or unauthorized generative AI solutions, ElasticGPT has built-in access to multiple LLMs. If ElasticGPT determines the user's question is best answered with outside data, it will automatically pull from external sources. As a result, operations stay secure, and teams have access to the latest and greatest technology available.
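The routing decision described above can be illustrated with a minimal sketch. In production this decision would typically be made by a classifier or an LLM call; the keyword heuristic below only demonstrates the control flow, and every name in it is an assumption.

```python
# Hedged sketch of multi-model routing: answer from internal knowledge
# when the question is about the company, otherwise hand off to a
# general-purpose external LLM. Heuristic and names are illustrative.

INTERNAL_TOPICS = {"benefits", "payroll", "intranet", "onboarding", "expense"}

def route(question: str) -> str:
    """Return 'internal_rag' for company questions, 'external_llm' otherwise."""
    words = set(question.lower().split())
    return "internal_rag" if words & INTERNAL_TOPICS else "external_llm"

def answer_from_knowledge_base(q: str) -> str:
    return f"[internal RAG path would answer: {q}]"   # placeholder stub

def answer_with_external_model(q: str) -> str:
    return f"[external LLM path would answer: {q}]"   # placeholder stub

def answer(question: str) -> str:
    if route(question) == "internal_rag":
        return answer_from_knowledge_base(question)
    return answer_with_external_model(question)
```

Because both paths sit behind one sanctioned entry point, employees get external-model capability without pasting company data into unvetted tools, which is the point of curbing shadow AI.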

C-suite support: CIO
Our entry into AI started through a series of hackathons that our IT leadership team initiated. It was important to get our teams thinking about our challenges in new ways, get excited about the prospect of AI, and see where the boundaries were. We quickly learned that barriers to entry would include a cultural shift as well as the technical challenges that accompany leading-edge technologies. At times, we needed to come back to the whiteboard and consider better ways to navigate our implementation. In addition to our AI center of excellence, which comprised a few AI experts, passionate AI hobbyists in key roles played a part in ensuring this effort took off.
Rusty Searle, Interim CIO, Elastic
The result: ROI within 2 months of launch
Within the first three months of its launch, ElasticGPT enabled faster and more intuitive discovery and search across proprietary data sources. As a result, employees reclaimed more than five hours per month, which totals about 63 hours per year.
Within two months of launch, the gain in efficiency offset the cost of running ElasticGPT with LLM hosting and building labor included.
Bonus: An internal survey reported a 98% employee satisfaction rate.
63 hours saved per employee per year | 2-month payback period
Transforming security teams’ workflows with Elastic AI Assistant
The business challenge: Exponential growth in attack surfaces and attack volume
As the use of SaaS applications continues to increase, attack surfaces continue to grow. Meanwhile, new and increasingly sophisticated threats are emerging with the help of AI. With constantly moving parts, visibility is limited — and adapting quickly is difficult. The same goes for getting actionable intelligence. Estimating the potential impact and risk of a given threat in a given environment requires laser focus.
Traditional reporting methods no longer worked in the face of constant new threat reports from disparate sources. Manually collecting signals, parsing through documentation, and analyzing insights was time-consuming and inefficient when up against fast-moving and multiplying targets.
The solution: Elastic AI Assistant
Elastic AI Assistant for Security enhances security analyst expertise, increases efficiency, and reduces the manual workload involved in investigation and response — streamlining the entire threat intelligence reporting process.
To support this, our team developed markdown templates for each type of threat intelligence report and stored them in the AI Assistant’s knowledge base. This standardization enables consistent, real-time data collection from sources like threat feeds, security blogs, and incident reports. The AI Assistant then generates a first draft of each report, allowing analysts to focus on refining insights and assessing risk.
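Template-driven drafting of this kind can be sketched with the standard library alone. The report type and fields below are assumptions for illustration, not the team’s actual templates; in practice the rendered draft would be passed to the AI Assistant as the basis for generation.

```python
# Illustrative sketch: a markdown template per report type, filled with
# collected signals to produce a consistent first draft for an analyst
# to refine. Template structure and fields are hypothetical.
from string import Template

THREAT_REPORT_TEMPLATE = Template("""\
# Threat Intelligence Report: $threat_name

## Summary
$summary

## Indicators of Compromise
$iocs

## Recommended Actions
$actions
""")

def draft_report(threat_name: str, summary: str,
                 iocs: list[str], actions: list[str]) -> str:
    """Render a standardized first-draft report from collected signals."""
    return THREAT_REPORT_TEMPLATE.substitute(
        threat_name=threat_name,
        summary=summary,
        iocs="\n".join(f"- {i}" for i in iocs),
        actions="\n".join(f"- {a}" for a in actions),
    )
```

Standardizing the skeleton this way is what makes the output consistent across analysts and report types, leaving human effort for the judgment calls: relevance, risk, and recommended response.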
It also supports in-depth threat trend analysis to help identify patterns and anticipate emerging threats based on historical activity. This lets security teams understand the ever-evolving threat landscape for a more mature security posture.

C-suite support: CISO
When generative AI became mainstream, we knew threat actors would get more creative and increase the velocity of their attacks. We were already building proactive measures to secure our ecosystem. To manage the growing scale of the attack surface, we sought ways to quickly reduce the noise. With the limited resources we had for threat intelligence reporting, it was a no-brainer to use GenAI to accelerate analyst reporting. The build was an easy lift to enable GenAI to do the heavy lifting for us. With access to the AI Assistant within our Elastic Security solution, it has been much easier for me and our analysts to parse through various reports and make decisions on where to focus our energy.
Mandy Andress, CISO, Elastic
The result: Analysts recouped 75% of their time
Since implementing the AI Assistant, Elastic’s security team has reclaimed 75% of analysts’ time while increasing threat intelligence report output by 92%.
By streamlining manual processes, the AI Assistant empowers analysts to deliver broader, real-time threat insight coverage. With more time to focus on high-value tasks, such as comprehensive analyses, analysts can now dive deeper into understanding the relevance and impact of emerging threats within Elastic’s specific environment and take on a proactive security posture.
75% of analyst time reclaimed | 92% increase in threat intelligence reports output
Boost business value with GenAI for executives
There are many high-yielding ways to use generative AI in your organization, including data synthesis, predictive analytics, interactive digital manuals, threat hunting, and more. At Elastic, our generative AI implementations succeeded because we:
- Identified real, persistent problems in customer service, employee support, and security
- Built vertical-specific AI experiences using a strong foundation of high-quality, enterprise-specific data
- Embedded AI into workflows — not just interfaces — to deliver real value to the people using these tools
- Avoided one-off solutions that only addressed single problems; we built with the future in mind by creating an ecosystem of AI models
While many companies are still experimenting with copilots and chatbots, our GenAI use cases show how to move from pilot to production with a clear and measurable return on investment in AI.
Most importantly, we were able to move these use cases from pilot to production because they were championed by IT decision-makers and executives.
Sources
1. McKinsey, “Seizing the agentic AI advantage,” June 13, 2025.
2. Harvard Business Review, “Kick-Ass Customer Service,” 2017.
The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.
In this blog post, we may have used or referred to third party generative AI tools, which are owned and operated by their respective owners. Elastic does not have any control over the third party tools and we have no responsibility or liability for their content, operation or use, nor for any loss or damage that may arise from your use of such tools. Please exercise caution when using AI tools with personal, sensitive or confidential information. Any data you submit may be used for AI training or other purposes. There is no guarantee that information you provide will be kept secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use.
Elastic, Elasticsearch, and associated marks are trademarks, logos or registered trademarks of Elasticsearch N.V. in the United States and other countries. All other company and product names are trademarks, logos or registered trademarks of their respective owners.