The Changelog

Release notes for Ayraa upgrades, as well as news, vision, and thoughts throughout her journey.

Release Notes: May 1 - May 11, 2025

Overview

This release introduces a series of foundational upgrades designed to make your experience faster, smarter, and more intuitive. From smarter query understanding and end-to-end search across Google Drive and Box to major performance improvements and robust bug fixes—this update lays the groundwork for deep, reliable, and scalable knowledge work.


New Features

Deep Research for Google Drive in Workflows

You can now directly search and reference Google Drive content—docs, slides, spreadsheets—within workflows. Both semantic and keyword search are supported, so whether you remember the filename or just the topic, Ayraa will surface the right document. Use cases include generating reports, building sales summaries, or sourcing prior research—without toggling tools.

Deep Research support for Box.com in Workflows

Teams using Box can now pull relevant content into workflows without manual switching. This includes files, folders, and nested documents.
Box support unlocks unified workflows for teams collaborating across multiple storage systems, streamlining search and reducing redundancy.

Deep Research support for Jira History in Workflows

You can now access the entire change log of any Jira issue via Deep Research. From initial creation to final closure, this timeline tracks status updates, assignee handoffs, comments, and QA checkpoints. This is critical for retrospectives, compliance, and understanding dev team velocity over time.


Enhancements

Natural Language Time Understanding

Ayraa now understands time the way you speak it. Whether you're searching for “Slack discussion from Sunday” or “Meeting with John from last week,” the platform intelligently parses and scopes results across your workspace. No filters or rigid formats needed—just ask as you would in conversation.
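
For the curious, the heart of a feature like this is relative-date resolution. Here's a minimal, stdlib-only sketch of how a phrase like "last week" might be scoped to a concrete search window - an illustration of the idea, not Ayraa's actual parser:

```python
from datetime import datetime, timedelta

def parse_relative_range(phrase: str, now: datetime) -> tuple[datetime, datetime]:
    """Map a conversational time phrase to a (start, end) search window."""
    phrase = phrase.lower()
    today = now.replace(hour=0, minute=0, second=0, microsecond=0)
    if phrase == "last week":
        # Monday of the previous calendar week through the following Sunday.
        start = today - timedelta(days=today.weekday() + 7)
        return start, start + timedelta(days=7)
    if phrase == "sunday":
        # Most recent Sunday (weekday() == 6).
        offset = (now.weekday() - 6) % 7 or 7
        start = today - timedelta(days=offset)
        return start, start + timedelta(days=1)
    raise ValueError(f"unrecognized phrase: {phrase}")

start, end = parse_relative_range("last week", datetime(2025, 5, 9))
print(start.date(), "->", end.date())  # 2025-04-28 -> 2025-05-05
```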

Enhanced Jira Query Accuracy

Jira results are now smarter, faster, and more contextually accurate. We've improved how complex queries are parsed, scored, and ranked—so users working across multiple boards, projects, or teams get the right results every time.

Index Visibility Transparency

Search results now clearly show how far back your connected apps are indexed. This helps teams understand what data is searchable—and prevents confusion when older documents don’t appear in results.

Interface Refinements

  • Alphabetical Connector Sorting: Easier navigation when managing many app integrations.
  • Meetings App Visibility: Now more prominently displayed in app selector.
  • Consistent Drive Naming: Google Drive is now labeled uniformly across all touchpoints.

Bug Fixes

Workflows & Reports

  • Resolved inconsistent results in Box and Drive workflows
  • Fixed broken links and formatting issues in Jira Deep Research reports
  • Improved reliability of Jira history reports

User Interface

  • Corrected broken icons and special character formatting in collections
  • Fixed display glitches in Assist, especially with ClickUp references
  • Addressed alignment and hover states for interactive tooltips and cards

Admin & Backend

  • Resolved issues with app connectors remaining active after being disabled
  • Fixed disappearing collection folders after creation
  • Ensured stability of Google Drive tools when editing existing workflows

Performance Improvements

Warm Cache System

We’ve introduced a warm-cache model that keeps your workspace intelligently “pre-loaded.” This significantly reduces loading times for common tasks and ensures up-to-date results without stale data.
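
For readers who like to see the shape of such things: a warm cache is essentially a TTL cache plus proactive refresh. A generic sketch of the pattern (not Ayraa's implementation) might look like:

```python
import time

class WarmCache:
    """Keep hot entries pre-loaded and refresh them when they go stale."""
    def __init__(self, loader, ttl_seconds: float = 300.0):
        self.loader = loader          # function: key -> fresh value
        self.ttl = ttl_seconds
        self.store = {}               # key -> (value, fetched_at)

    def get(self, key):
        value, fetched_at = self.store.get(key, (None, 0.0))
        if time.time() - fetched_at > self.ttl:
            value = self.loader(key)  # refresh on expiry (a cache miss)
            self.store[key] = (value, time.time())
        return value

    def warm(self, keys):
        """Pre-load common keys ahead of demand, e.g. on login or on a timer."""
        for key in keys:
            self.store[key] = (self.loader(key), time.time())
```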

Slack Response Latency Fixes

We fixed a known delay in Slack assistant conversations, especially for direct messages and multi-turn interactions. Replies are now significantly faster and more stable.


Security Updates

Our updated link architecture now offers improved tenant isolation and data protection. Shared links are scoped accurately and protected with stronger permission boundaries—ensuring your data stays private, even when content is shared across teams.


Closing
The platform is becoming truly fun to build. We hope you're enjoying these updates as much as we enjoy working on them!

P.S. Most of this was written using a Deep Research Workflow template for Release Notes.

RAG is Dead. Long Live RAG.

Why ultra-large context windows won’t replace retrieval (and how retrieval-augmented generation is evolving).

The “RAG is Dead” Argument

Every few months, a new leap in large language models triggers a wave of excitement – and the premature obituary of Retrieval-Augmented Generation (RAG). The latest example: models boasting multi-million-token context windows. Google’s Gemini model, for instance, now offers up to a 2 million token prompt, and Meta’s next LLM is rumored to hit 10 million. That’s enough to stuff entire libraries of text into a single query. Enthusiasts argue that if you can just load all your data into the prompt, who needs retrieval? Why bother with vector databases and search indices when the model can theoretically “see” everything at once?

It’s an appealing idea: give the AI all the information and let it figure it out. No more chunking documents, no more relevance ranking – just one giant context. This argument crops up regularly. RAG has been declared dead at every milestone: 100K-token models, 1M-token models, and so on. And indeed, with a 10M-token window able to hold over 13,000 pages of text in a single prompt, it feels as though we’re approaching a point where the model’s “immediate memory” could encompass an entire corporate knowledge base. Why not simply pour the whole knowledge base into the prompt and ask your question?

But as with many things in technology, the reality is more complicated. Like a lot of “this changes everything” moments, there are hidden trade-offs. Let’s examine why the reports of RAG’s death are – as Mark Twain might say – greatly exaggerated.

The Scale Problem: Context ≠ Knowledge Base

A key premise of the “just use a bigger context” argument is that all relevant knowledge can fit in the context window. In practice, even ultra-long contexts are a drop in the bucket compared to the scale of real-world data. Enterprise knowledge isn’t measured in tokens; it’s measured in gigabytes or terabytes. Even a 10M-token context (which, remember, is fantastically large by today’s standards) represents a tiny fraction of an average company’s documents and data. One analysis of real company knowledge bases found that most exceeded 10M tokens by an order of magnitude, and the largest were nearly 1000× larger. In other words, against a 10-million-token window, some organizations would need a 10-billion-token one to load everything – and tomorrow they’ll need even more.
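
The arithmetic is easy to check. Assuming roughly 770 tokens per page (consistent with the 13,000-pages figure above) and about 4 characters per token - both rough rules of thumb - even a modest text corpus dwarfs a 10M-token window:

```python
# Back-of-envelope: how far does a 10M-token window go?
TOKENS_PER_PAGE = 770    # rough: 10M tokens ~ 13,000 pages, per the claim above
CHARS_PER_TOKEN = 4      # common rule of thumb for English text

window_tokens = 10_000_000
corpus_gb = 50           # a modest enterprise corpus of plain text

corpus_tokens = corpus_gb * 1_000_000_000 / CHARS_PER_TOKEN
print(f"window holds ~{window_tokens // TOKENS_PER_PAGE:,} pages")
print(f"corpus is ~{corpus_tokens / window_tokens:,.0f}x the window")
# window holds ~12,987 pages; a 50 GB corpus is ~1,250x the window
```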

It’s the age-old story: as memory grows, so does data. No matter how large context windows get, knowledge bases will likely grow faster (just as our storage drives always outpace our RAM). That means you’ll always face a filtering problem. Even if you could indiscriminately dump a huge trove of data into the model, you would be showing it only a slice of what you have. Unless that slice is intelligently chosen, you risk omitting what’s important.

Crucially, bigger context is not the same as better understanding. We humans don’t try to read an entire encyclopedia every time we answer a question – we narrow our focus. Likewise, an LLM with a massive buffer still benefits from guidance on where to look. Claiming large contexts make retrieval obsolete is like saying we don’t need hard drives because RAM is enough. A large memory alone doesn’t solve the problem of finding the right information at the right time.

Diminishing Returns of Long Contexts (The “Context Cliff”)

Another overlooked issue is what we might call the context cliff – the way model performance degrades as you approach those lofty context limits. Just because an LLM can take in millions of tokens format-wise doesn’t mean it can use all that information effectively. In fact, research shows that models struggle long before they hit the theoretical max. The recent NoLiMa benchmark (1), designed to truly test long-context reasoning (beyond trivial keyword matching), found that by the time you feed a model 32,000 tokens of text, its accuracy at pulling out the right details had plummeted – dropping below 50% for all tested models. Many models start losing the thread with even a few thousand tokens of distraction in the middle of the prompt.

This “lost in the middle” effect isn’t just a rumor; it’s been documented in multiple studies. Models tend to do best when relevant information is at the very beginning or end of their context, and they often miss details buried in the middle. So, if you cram in 500 pages of data hoping the answer is somewhere in there, you might find the model conveniently answers using something from page 1 and ignores page 250 entirely. The upshot: ultra-long inputs yield diminishing returns. Beyond a certain point, adding more context can actually confuse the model or dilute its focus, rather than improve answers.

In real deployments, this means that giving an LLM everything plus the kitchen sink often works worse than giving it a well-chosen summary or snippet. Practitioners have noticed that for most tasks, a smaller context with highly relevant info beats a huge context of raw data. Retrieval isn’t just a clever trick to overcome old 4K token limits – it’s a way of avoiding overwhelming the model with irrelevant text. Even the latest long-context models “still fail to utilize information from the middle portions” of very long texts effectively. In plain terms: the larger the context, the fuzzier the model’s attention within it.

The Latency and Cost of a Token Avalanche

Let’s suppose, despite the above, that you do want to stuff a million tokens into your prompt. There’s another problem: someone has to pay the bill – and wait for the answer. Loading everything into context is brutally expensive and slow. Language models don’t magically absorb more text without a cost; processing scales roughly linearly with input length (if not worse). 

In practical terms, gigantic contexts can introduce latency measured in tens of seconds or more. Users have reported that using a few hundred thousand tokens in a prompt (well under the max) led to 30+ second response times, and up to a full minute at around 600K tokens. Pushing toward millions of tokens often isn’t even feasible on today’s GPUs without specialized infrastructure. On the flip side, a system using retrieval to grab a handful of relevant paragraphs can often respond in a second or two, since the model is only reasoning over, say, a few thousand tokens of actual prompt. That’s the difference between a snappy interactive AI and one that feels like it’s back on dial-up.
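
A back-of-envelope model makes the point. If prompt processing scales roughly linearly, prefill time alone grows with every token you add. The throughput figure below is purely illustrative (chosen to match the user reports above); real numbers vary widely by model and hardware:

```python
# Rough prefill estimate: processing time scales with prompt length.
PREFILL_TOKENS_PER_SEC = 10_000   # illustrative throughput assumption

for prompt_tokens in (4_000, 600_000, 10_000_000):
    seconds = prompt_tokens / PREFILL_TOKENS_PER_SEC
    print(f"{prompt_tokens:>10,} tokens -> ~{seconds:,.1f}s just to read the prompt")
# 4,000 -> ~0.4s; 600,000 -> ~60s (the "full minute" above); 10M -> ~1,000s
```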

Then there’s cost. Running these monster prompts will burn a hole in your wallet. Even if costs fall over time, inefficiency is inefficiency. Why force the model to read the entire haystack when it just needs the needle? It’s like paying a team of researchers to read every book in a library when you have the call number of the one book you actually need. Sure, they might find the answer eventually – but you’ve wasted a lot of time and money along the way. As a contextual AI expert put it, do you read an entire textbook every time you need to answer a question? Of course not!

In user-facing applications, these delays and costs aren’t just minor annoyances – they can be deal-breakers. No customer or employee wants to wait 30 seconds for an answer that might be right. And no business wants to foot a massive cloud bill for an AI that insists on reading everything every time. 

Adding only the information you need, when you need it, is simply more efficient.

Training vs. Context: The Limits of “Just Knowing It”

Some might argue: if long contexts are troublesome, why not just train the model on the entire knowledge base? After all, modern LLMs were trained on trillions of tokens of text – maybe the model already knows a lot of our data in its parameters. Indeed, part of the allure of very large models is their parametric memory: they’ve seen so much that perhaps the factoid or document you need is buried somewhere in those weights. Does that make retrieval redundant?

Not really. There’s a fundamental distinction between what an AI model has absorbed during training and what it can access during inference. Think of training as the model’s long-term reading phase – it’s seen a lot, but that knowledge is compressed and not readily searchable. At inference time (when you prompt it), the model has a limited “attention span” – even 10 million tokens, in the best case – and a mandate to produce an answer quickly. It can’t scroll through its training data on demand; it can only draw on what it implicitly remembers and what you explicitly provide in the prompt. And as we’ve seen, that implicit memory can be fuzzy or outdated. Yes, the model might have read a particular document during training, but will it recall the specific details you need without any cues? Often, no. It might instead hallucinate or generalize, especially if the info wasn’t prominent or has since changed.

This is why RAG was conceived in the first place – to bridge the gap between a model’s general training and the specific, current knowledge we need at query time. RAG extends a model’s effective knowledge by fetching relevant snippets from external sources and feeding them in when you ask a question. It’s a bit like giving an open-book exam to a student: the student might have studied everything, but having the textbook open to the right page makes it far more likely they’ll get the answer right (and show their work). With RAG, the language model doesn’t have to rely on the hazy depths of its memory; it can look at the exact data you care about, right now. This not only improves accuracy but also helps with issues like hallucination – the model is less tempted to make something up if the source material is right in front of it.
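
The mechanic is simple enough to sketch in a few lines. Here is the open-book idea in miniature, with a toy word-overlap scorer standing in for a real embedding model - a sketch of the pattern, not any particular production pipeline:

```python
def score(query: str, chunk: str) -> float:
    """Toy relevance score: word overlap. A real system would use embeddings."""
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / (len(q) or 1)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    context = "\n\n".join(retrieve(query, chunks))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Q1 sales rose 5% after the Project X launch.",
    "The cafeteria menu changes on Mondays.",
    "Project X shipped on March 3 to all regions.",
]
print(build_prompt("When did Project X ship?", docs))
```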

Moreover, enterprise data is often private, proprietary, and constantly changing. We can’t realistically pre-train or fine-tune a giant model from scratch every time our internal wiki updates or a new batch of customer emails comes in. Even if we could, we’d still face the inference-time limits on attention. The model might “know” the latest sales figures after fine-tuning, but unless those figures are somehow prompted, it might not regurgitate the exact number correctly. Retrieval lets us offload detailed or dynamic knowledge to an external store and selectively pull it in as needed. It’s the best of both worlds: the model handles the general language and reasoning, and the retrieval step handles the targeted facts and context.

Finally, there’s an important practical concern: permission and security. If you naively dump an entire company’s data into a prompt, you risk exposing information to the model (and thus to users) that they shouldn’t see. In a large organization, not everyone can access all documents. RAG systems, by design, can enforce access controls – retrieving only the content the user is allowed to know. In contrast, a monolithic prompt that contains “everything” can’t easily disentangle who should see what once it’s in the model’s context. This is especially vital in domains like finance or healthcare with strict data governance. In short, retrieval acts as a gatekeeper, ensuring the AI’s knowledge use is not just relevant, but also compliant with rules and roles.
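
In code terms, the gatekeeping happens before ranking: filter the candidate set by the user's permissions, then retrieve. A minimal sketch - the document schema and group names here are made up for illustration:

```python
def overlap(query: str, text: str) -> int:
    """Toy relevance score: shared-word count (stand-in for embeddings)."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve_for_user(query, chunks, user_groups, k=3):
    """Enforce ACLs before ranking: the model never sees what the user can't."""
    visible = [c for c in chunks if c["acl"] & user_groups]
    return sorted(visible, key=lambda c: overlap(query, c["text"]), reverse=True)[:k]

chunks = [
    {"text": "FY25 board deck: revenue targets", "acl": {"finance", "exec"}},
    {"text": "Engineering onboarding guide", "acl": {"everyone"}},
]
print(retrieve_for_user("revenue targets", chunks, user_groups={"everyone"}))
# Only the onboarding guide comes back; the board deck is filtered out pre-ranking.
```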

RAG Is Evolving, Not Dying

All this isn’t to say long context windows are useless or that we shouldn’t celebrate larger memory in our models. They are a genuine breakthrough, and they will enable new capabilities – we can give our AIs more background and sustain longer dialogues now. But rather than eliminating the need for retrieval, these advances will augment and transform it. The smartest systems will use both a big context and retrieval, each for what it’s best at. It’s not a binary choice. As one AI leader put it, we don’t need to choose between RAG and long contexts any more than we must choose between having RAM and having a hard drive – any robust computer uses both.

In fact, RAG is likely to become more integrated and nuanced in the future, not less. The naive version of RAG – “search and stuff some text chunks blindly into the prompt” – may fade, but it will be replaced by smarter retrieval paradigms that work hand-in-hand with the model’s training and reasoning. Future retrieval-augmented systems will be:

  • Task-aware and context-sensitive: Rather than retrieving text in a vacuum, they’ll understand what the user or application is trying to do. They might fetch different kinds of information if you’re writing an email vs. debugging code vs. analyzing a contract. They’ll also leverage the model’s improved ability to handle longer context by retrieving richer, more relevant packs of information (but still only what’s needed). In essence, retrieval will become more intelligent curation than brute-force search.
  • Secure and personalized: As discussed, retrieval will respect user permissions and roles, acting as an intelligent filter. It might maintain a “five-year cache” of knowledge for an employee – the documents and data most relevant to their job from the past few years – so that common queries are answered from that cache almost instantly. Meanwhile, less frequently needed or older information can be fetched on demand from deeper storage. By tailoring what is readily accessible (and to whom), RAG can provide fast access to the right slice of knowledge for each scenario, without ever exposing things a user shouldn’t see.
  • Cost-efficient and balanced: We’ll see systems strike a balance between brute-force ingestion and selective retrieval. If (or when) context windows expand even further, RAG techniques might shift to feeding the model a pre-organized dossier of relevant information, rather than a hodgepodge of raw text. That is, retrieval might pre-digest the data (through summarization or indexing) so that even a large context is used optimally. The endgame is that each token the model sees is likely to be useful. This keeps token costs down and latency low, even if the “raw” available data grows without bound. RAG will also work in tandem with model fine-tuning: if there are pieces of knowledge every user will need often, those can be baked into the model weights or prompt defaults, while the long tail of specific info remains handled by retrieval.

In short, RAG isn’t dying – it’s maturing. We’ll probably stop thinking of “RAG” as a separate module and see it become a seamless part of how AI systems operate, much like caching and indexing are just a normal part of database systems. The next time someone confidently pronounces “RAG is dead,” remember that we’ve heard that before. Each time, we later discover that retrieval remains essential – it just adapts to the new landscape. As long as we have more data than we can cram into a model’s head at once (which will be true for the foreseeable future), we’ll need mechanisms to choose what to focus on.

The future will belong to those who master both aspects: building models that leverage large contexts and designing retrieval that makes those contexts count. The tools and terminology may evolve (maybe we’ll call it “context orchestration” or something else), but the underlying principle – that targeted information access matters – will hold. Far from being a relic of the past, RAG may be the key to making these ever-more-powerful models actually useful in the real world.

After all, it’s not about how much information you can shove into a prompt – it’s about giving the right information to the model at the right time.

And that is a problem we’ll be solving for a long time to come.

(1) https://arxiv.org/abs/2502.05167

Release Notes: April 14 - April 28, 2025

Overview

This release period introduces several exciting new features and improvements, including our groundbreaking Deep Research Workflows feature, a new ClickUp integration, enhanced support for Salesforce Cases, and significant improvements to search, assist, and follow-up query functionality. We've also made numerous bug fixes and performance enhancements to provide a smoother, more reliable experience throughout the platform.

New Features

Deep Research Workflows

Deep Research Workflows revolutionizes how you extract insights from your workplace data. This powerful new feature allows you to create, schedule, and run sophisticated research workflows that automatically generate comprehensive reports from your integrated apps. With DRW, you can:

  • Set specific instructions and let AI handle complex multi-step research tasks
  • Schedule workflows to run automatically at your preferred times
  • Customize visibility settings for seamless team collaboration
  • Select specific app connectors to include in your research

From automated release notes and JIRA activity reports to sales analytics and workspace recaps, DRW enables workflows you never thought possible. This feature represents a significant advancement in research capabilities, allowing you to focus on deep work while automating routine information gathering.

ClickUp Integration

Our new ClickUp integration enables seamless search across all your ClickUp tasks, docs, and projects. This connector covers task names, descriptions, assignees, reporters, statuses, tags, comments, due dates, and associated documents. Designed for product management, operations, project management, engineering, and marketing teams, this integration brings clarity and speed to your workflows by unifying ClickUp data within your enterprise search experience.

Salesforce Cases Support

You can now query Salesforce Cases related to opportunities, accounts, and contacts. This enhancement is particularly valuable for customer service and sales teams who need to quickly find information across their Salesforce Cases ecosystem. The integration allows for natural language queries and delivers comprehensive results with proper formatting and context.

Multi-message Responses in Slack

The Ayraa Slack bot now supports multi-message responses, breaking longer responses into multiple messages when necessary. This ensures that all information—including references—is properly displayed, even for detailed queries that exceed Slack's character limits. No more truncated responses or missing references!
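
Under the hood, this kind of feature boils down to splitting a long reply on natural boundaries so no message is cut mid-thought. A rough sketch of the pattern - the limit constant is illustrative, not Slack's exact figure:

```python
LIMIT = 4_000  # illustrative per-message budget; real Slack limits vary by API surface

def split_for_slack(text: str, limit: int = LIMIT) -> list[str]:
    """Split a long reply on paragraph boundaries so nothing gets truncated."""
    messages, current = [], ""
    for para in text.split("\n\n"):
        if len(para) > limit:          # flush, then hard-wrap an oversized paragraph
            if current:
                messages.append(current)
                current = ""
            while len(para) > limit:
                messages.append(para[:limit])
                para = para[limit:]
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= limit:
            current = candidate
        else:
            messages.append(current)
            current = para
    if current:
        messages.append(current)
    return messages
```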

Major Improvements

Search and Assist Enhancements

  • More Concise Search Responses: Search responses are now more concise and to the point, with a clear indication to use Assist for more interactive experiences
  • Better AI Model: Upgraded to the latest GPT-4.1 model for improved response quality and accuracy
  • Automatic Recovery from AI Glitches: Added automatic retry mechanism to handle occasional blank responses from AI
  • Custom Slack Messaging: Slack at-ayraa messaging is now tailored based on which apps you have connected
  • Improved JIRA Query Accuracy: Complex JIRA queries now return more accurate and helpful results

Follow-up Query Improvements

  • Context Preservation: Fixed issues where follow-ups incorrectly included data from old sessions
  • Better Context Understanding: Follow-up queries now better understand the context from earlier related queries
  • Extended Related Information: Improved ability to find related information across sessions
  • Enhanced External Knowledge Integration: Fixed confusion between external and workspace knowledge in follow-ups

Meeting Features

  • Email Sharing for Meeting Summaries: You can now share meeting summaries via email, making it easier to distribute important information to participants
  • Team Sharing Improvements: Enhanced ability to share meeting transcripts with teams including the "All" team
  • Ad-hoc Meeting Transcripts: Hosts now properly receive transcripts for ad-hoc meetings

Stalled Thread and Discover Improvements

  • Weekend-Aware Stalled Detection: Stalled thread detection now intelligently skips weekend hours
  • Cross-Timezone Support: Improved timing calibration for stalled detection across different time zones
  • More Reliable Discover Scribes: Fixed discover automation for consistent scribe creation
  • Smarter Thread Detection: Improved accuracy of thread status detection to reduce false-positive stalled notifications

User Experience Improvements

  • Streamlined Signup Flow: Simplified email field during signup
  • Gmail Sign-up Support: Enabled Gmail sign-up to improve accessibility
  • Profile Picture Integration: Improved auto-fetching of profile pictures from Google/Microsoft accounts
  • Enhanced Integration Pages: Improved responsiveness and clarity of integration success, failure, and cancel pages
  • Go Links Improvements: Enhanced Go Links search text and functionality
  • Dialog Improvements: Fixed visual issues with dialog pop-ups

Bug Fixes

Search and Assist Fixes

  • Fixed opportunity card and hover references display in the Anytime filter
  • Resolved collections search issues with the Anytime filter
  • Fixed missing AI summaries for certain Jira status-based searches
  • Improved Anytime search response time
  • Fixed HTML tags appearing in reports
  • Resolved 'exception occurred' error for specific Slack results
  • Fixed double confidence scores for collection follow-ups
  • Fixed empty reference sections in At-Ayraa queries

Collections and Documents Fixes

  • Fixed "exception occurred" error for existing cards/files in folders
  • Resolved issues when adding subject matter experts to team-shared collection folders
  • Fixed team selection in sharing options during new collection creation
  • Improved scrolling functionality for the collections page
  • Fixed search functionality for collection cards by username

Meetings and Collaboration Fixes

  • Fixed mixed-up icons on the meetings page
  • Fixed meeting retention setting changes from months to days
  • Resolved user email persistence issues in the "share with" field across meeting summaries
  • Fixed issues with icons not loading correctly

User Interface Fixes

  • Fixed User Pilot flow issues during Go link creation
  • Fixed app connectors incorrectly showing as disabled in individual mode
  • Fixed the flow after creating a collection folder
  • Improved admin visibility of users in new workspaces
  • Fixed missing invitation emails when adding users
  • Fixed Go Links alignment issues

Performance Enhancements

  • Faster Queries: Removed unnecessary processing for multi-keyword queries, saving 1000-1500ms per query
  • Smarter UI Loading: Implemented optimized loading of UI components for faster page rendering
  • Enhanced Semantic Search: Improved accuracy and performance of semantic search functionality
  • Streamlined Response Generation: Optimized the response pipeline for follow-up queries
  • Cached Content Utilization: Improved performance by using cached content for discovery and stalled thread detection

The Future of Workplace Search Has Arrived - Deep Research Workflows by Ayraa

The modern workday is defined by motion without momentum. Teams move across Slack, Confluence, JIRA, Salesforce, Notion, Drive, and Gmail—chasing context, piecing together updates, and stitching progress manually from fragmented information. The cost is not just measured in hours lost; it is a deeper erosion of focus, creativity, and forward momentum.

Ayraa envisions a better foundation for modern work: a system that continuously defragments & organizes your workspace for you through pre-defined workflows.

The Workflows app

We're excited to introduce Deep Research Workflows: autonomous, intelligent workflows that continuously transform scattered knowledge - from chats, emails, tickets, and documents - into structured synopses, custom-prepared for you and delivered precisely where and when you need them.

Workflows That Think Forward

A Deep Research Workflow is a persistent, self-operating system built to mirror how employees compile information from scattered sources, but now powered by a reasoning AI model that can search, collate, and synthesize knowledge on your behalf. You describe the outcome you need—whether a weekly team recap, a sales pipeline report, or a stakeholder newsletter based on workspace activity—and Ayraa orchestrates the steps to deliver it autonomously.

Exit no-code flowcharts. Enter natural language.

The process is simple: you define the goal once in natural language. Ayraa unpacks it into logical actions, reasons across your workspace in real time, and carries the work forward on a schedule you control. There are no fragile automations to maintain, no brittle triggers to fix. The system runs reliably across Slack, JIRA, Confluence, Salesforce, Notion, Drive, and Gmail, adapting to changes and surfacing the most critical insights automatically.

It's research that doesn't just happen once — it happens reliably, rhythmically, and intelligently.

How It Works Behind the Scenes

Designing an Ayraa workflow is as simple as writing out what you want. There's no need to draw flowcharts or write any code. You speak your workflow into existence by giving the AI a detailed description of the task, just like you would explain it to a colleague. Because the system uses advanced reasoning models that understand complex instructions, you can be very specific and nuanced in your request. Ayraa will understand your intent and figure out how to execute it step by step.

Here's a sample workflow excerpt for a template that reads your entire Slack activity and finds any places where you have someone waiting for an action or task.

Objective

You are an excellent AI assistant adept at following Slack threads and understanding if someone is waiting on a person for a response.

Please search Slack with the approach detailed below and capture summaries of all places where people are waiting on me. Please also share the excerpt of what exactly was asked, and your 1-2 sentence summary of what is needed from me based on the overall context.

Search steps
The way to search Slack would be to look at the time period selected (use the last 24 hours if nothing is selected) and search for the name [insert your name here] (my name) in the Slack search. For example, if today is Apr 4th 2025, then you would search for "after:2025-04-02 John" to mean all messages that had the word "John" in them and that were posted after April 2nd – which includes Apr 4th (today) and, to be safe, all of Apr 3rd as well.
Then please extract the threads where you found these matches, and analyze those threads to figure out what is needed from me (as explained in the objective).
Reporting
Please create a report with an Executive summary, and then a list of topics (clear headlines for each thread you found, where just reading the title allows me to know what the thread/topic was about - use simple conversational English and not a "word salad" that is hard to follow).

Under each topic then, provide a summary of what is needed from me with an excerpt of the exact message and who sent it.

At the end, have a list of references cited above with Slack hyperlinks to each thread where I am needed.
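
For those wondering what Ayraa derives from instructions like these: the date arithmetic in that template reduces to a few lines. A sketch reproducing the worked example above:

```python
from datetime import date, timedelta

def slack_waiting_query(name: str, today: date, lookback_days: int = 2) -> str:
    """Build the template's Slack query, e.g. 'after:2025-04-02 John' on Apr 4."""
    cutoff = today - timedelta(days=lookback_days)
    return f"after:{cutoff.isoformat()} {name}"

print(slack_waiting_query("John", date(2025, 4, 4)))  # after:2025-04-02 John
```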

This natural language approach means anyone can set up an advanced workflow without technical barriers. Suppose you set a workflow to "Summarize the week's product updates every Friday at 5pm." Ayraa will then:

  • scan JIRA for features implemented, bugs fixed, and any launches,
  • extract the top themes and keywords from these,
  • then pull high-signal conversations from Slack on these topics and keywords,
  • scrub, score, and organize the content extracted from JIRA & Slack, synthesizing insights & gathering summaries,
  • format polished release notes,
  • and send them directly to your team's channel or inbox — without you lifting a finger.

And it does this every week without needing reminders, re-prompts, or maintenance.
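
Conceptually, each run executes a fixed pipeline. The sketch below mirrors the sequence above with stand-in stub functions; none of these names are Ayraa's actual API:

```python
# Hypothetical sketch of the Friday-5pm recap pipeline. Every function here is
# a stub standing in for a real connector or model call.
def search_jira(jql: str) -> list[str]:
    return ["Shipped dark mode", "Fixed login timeout"]        # stub results

def extract_themes(items: list[str]) -> list[str]:
    return sorted({w.lower() for item in items for w in item.split()})[:5]

def search_slack(keywords: list[str]) -> list[str]:
    return [f"thread discussing '{k}'" for k in keywords]      # stub results

def synthesize(jira: list[str], slack: list[str]) -> str:
    return "This week: " + "; ".join(jira) + f" ({len(slack)} related threads)"

def weekly_product_recap(post):
    jira_items = search_jira("resolved >= -7d")    # features, fixes, launches
    themes = extract_themes(jira_items)            # top keywords and topics
    threads = search_slack(themes)                 # high-signal conversations
    post(synthesize(jira_items, threads))          # deliver the formatted recap

weekly_product_recap(print)
```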

The knowledge synthesis goes beyond surface-level scraping. The reasoning model creates well-thought-out plans, and the autonomous execution allows the AI to run for a long time and go deep in your workspace - iterating, if needed.

24/7 Deep Research on Autopilot

Once you define a workflow, Ayraa’s agent runs with it on autopilot.

Multi-step execution with advanced reasoning

Think of it as a tireless digital researcher that works around the clock on your behalf. Need a competitive analysis or an incident report first thing in the morning? The agent can be working on it overnight. These workflows introduce always-on helper agents to your workspace that can easily 10× your productivity by handling work continuously across limitless use cases.

Importantly, the agent doesn’t just run once – it can be set to run whenever needed or even continuously watch for new information. You don’t have to babysit it. After you hit “go,” the AI will autonomously carry out the task from start to finish. It’s like putting your research on cruise control: the heavy lifting happens in the background, without constant oversight. By the time you check in, the research is done and neatly packaged. This frees you from the constant juggle between tools and tasks, effectively eliminating a lot of busywork.

Fine-Grained App Control and Integration

Ayraa workflows operate across all your work apps – but you remain in control of where and how they search for information. With each workflow, you can specify exactly which applications or data sources the AI should use. For example, you might direct an agent to pull data only from your project management tool and database, or to search both your Slack messages and Google Drive documents. You can also provide per-app instructions to guide the agent: for instance, tell it to look in a particular Slack channel, or to ignore older documents in Confluence.

Control how the agent uses your apps

Under the hood, the AI translates your natural language instructions into targeted queries for each app. It knows how to use the APIs and search capabilities of tools like Slack, Gmail, Jira, or Salesforce to find relevant data. By controlling the app scope, you ensure the agent has access to the right context and nothing extraneous. This fine-grained control means the workflow’s output is both relevant and compliant with any data boundaries you have. In short, you get the benefits of deep integration with your tech stack without handing over the steering wheel entirely – you set the boundaries, and the AI executes within them.
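
You can picture the scope settings as a simple per-connector configuration. A hypothetical example - keys and fields here are illustrative only:

```python
# Hypothetical workflow scope: which connectors to use, plus per-app guidance.
workflow_scope = {
    "slack": {
        "enabled": True,
        "instructions": "Only look in #releases and #eng-updates.",
    },
    "confluence": {
        "enabled": True,
        "instructions": "Ignore pages not updated in the last 12 months.",
    },
    "gmail": {"enabled": False},   # out of scope for this workflow
}

active = [app for app, cfg in workflow_scope.items() if cfg.get("enabled")]
print(active)  # ['slack', 'confluence']
```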

Multi-Step Reasoning and Execution

Complex tasks often require multiple steps and careful reasoning, and Ayraa’s agents are built to handle exactly that. When a workflow runs, the AI doesn’t just do one simple search and stop. It mimics human-like reasoning: it will think through the task, break it down into sub-tasks, and plan a sequence of actions to achieve the end goal. After each step, the agent pauses to reflect on the results it just got. It checks whether those results are relevant and decides what to do next. This might involve refining a search query, looking up a definition, or branching out to another data source – whatever the plan requires.

This iterative, multi-step approach means the agent can tackle complex research questions that a single query can’t solve. It can gather information from multiple places, cross-reference facts, and adjust its strategy if new information changes the picture. And it does all of this tirelessly, without rushing or skipping steps. Workflows can run for minutes or even hours, methodically going through the plan just like a diligent human researcher would. The result is a thorough job: by the end, the AI has compiled and synthesized information from many sources, having effectively “thought through” the problem in a logical way.
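
The loop described here is the classic plan-act-reflect pattern. A generic sketch, where `plan`, `act`, and `reflect` would each be model calls in a real system:

```python
def run_agent(goal: str, plan, act, reflect, max_steps: int = 10):
    """Generic plan-act-reflect loop (a sketch, not Ayraa's agent)."""
    steps = plan(goal)                       # break the goal into sub-tasks
    findings = []
    while steps and max_steps > 0:
        step = steps.pop(0)
        result = act(step)                   # e.g. run a search, fetch a doc
        new_steps, keep = reflect(goal, step, result)
        if keep:                             # only retain relevant results
            findings.append(result)
        steps = new_steps + steps            # revised plan goes to the front
        max_steps -= 1
    return findings

notes = run_agent(
    "weekly recap",
    plan=lambda g: ["search jira", "search slack"],
    act=lambda s: f"results for: {s}",
    reflect=lambda g, s, r: ([], True),
)
print(notes)
```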

Collaboration and Template Sharing

Ayraa makes it easy to get started with workflows by providing dozens of templates for common use cases. You don’t have to start from scratch if you don’t want to. For example, there might be a template for “Weekly Sales Report” or “Product Release Notes” already available. You can pick a template that’s close to what you need and then customize it to fit your specific requirements. Every workflow can be tweaked – you can add or remove steps, change the data sources, or refine the instructions – so the outcome is exactly right for you.

Collaborate & share workflows

Once you have a workflow that works well, you can share it with your team. Collaborate and share workflows just as you would share a document or script. Your teammates can run the same workflow or further modify it for their needs. This means best practices in your organization can spread quickly. If one person figures out a great automated research process (like a perfect weekly engineering recap), everyone else can benefit from it in just a few clicks. The result is a more streamlined operation across the board, as people aren’t reinventing the wheel for recurring tasks.

Scheduling and Automation of Recurring Tasks

One of the most powerful features of Deep Research Workflows is the ability to schedule them to run automatically. You can set up a workflow to execute at a specific time or on a recurring schedule – for example, every day at 7:00 AM or every Friday afternoon. This is ideal for routine reports and ongoing research tasks. Ayraa gives you precise control over when a workflow runs, so the results are ready exactly when you need them, without manual intervention.
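
Scheduling a weekly slot like "every Friday afternoon" is straightforward calendar arithmetic. A stdlib-only sketch (the slot values are examples):

```python
from datetime import datetime, timedelta

def next_run(now: datetime, weekday: int, hour: int) -> datetime:
    """Next occurrence of a weekly slot, e.g. Friday 17:00 (weekday=4)."""
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    days_ahead = (weekday - now.weekday()) % 7
    if days_ahead == 0 and target <= now:
        days_ahead = 7                      # slot already passed today
    return target + timedelta(days=days_ahead)

print(next_run(datetime(2025, 4, 28, 9, 0), weekday=4, hour=17))
# 2025-05-02 17:00:00  (the following Friday)
```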

Automate your workspace recap & research

Consider the advantage of this scheduling: your Monday morning team summary can be prepared by 6:00 AM Monday, waiting in your inbox when you start your day. A nightly operations health check can run at midnight and Slack you the highlights by the time you wake up. Because the agent is truly 24/7, it doesn’t matter if these tasks need to run outside of working hours. You’ll get the benefit of up-to-date information delivered on your schedule. In short, automation plus scheduling means important insights are never late or forgotten – they arrive like clockwork.

Detailed Reports with Custom Formatting

The end product of a deep research workflow is typically a detailed report or summary, and Ayraa ensures those reports are polished and easy to read. The AI can output findings in a structured format that you define ahead of time. You might want bullet points, tables of key data, or a narrative summary with section headings – these can be configured as part of the workflow. With pre-configured formatting guidelines, every report comes out consistent and professional-looking without any extra effort.

What’s more, Ayraa’s agents do more than just compile raw data – they highlight the insights that matter. Because the AI is doing reasoning along the way, it can include context and explanations in the report, not just dump figures. For example, instead of only listing sales numbers, a report might add, “Sales increased 5% this week, likely due to the launch of Project X,” if it found that insight during its analysis. The workflow essentially writes the report for you, following your formatting preferences and injecting automated insights. This means you get immediate value from the results – they’re presentation-ready and often come with the story behind the data, not just the data itself.

Delivered where you work

Having a great report isn’t useful if you forget to check it, so Ayraa makes delivery convenient. The platform delivers your workflow results wherever you already work – that could be your email inbox, a Slack channel or direct message, or within the Ayraa app itself. You choose the delivery method that fits your routine. For instance, you might set a competitive intelligence report to be emailed to you and your team lead, while a daily engineering summary could be posted to a private Slack channel every evening.

This multi-channel delivery turns Ayraa into something like a 24/7 executive assistant. The information you need finds you, rather than you having to go look for it. If you live in Slack, you’ll see the updates right alongside your other conversations. If you prefer email, the report will show up as a nicely formatted message. And for deeper dives, you can always view the full details in Ayraa’s app. The key point is that the insights are integrated into your normal workflow and tools, so staying informed becomes effortless.

How Teams Are Using Workflows

Ayraa’s Deep Research Workflows are versatile and can be applied in virtually any domain. Here are a few concrete examples of how different teams use these AI-powered workflows:

  • Engineering Reports: Engineering managers can automate daily or weekly status reports. For example, a workflow can gather updates from source code repositories, issue trackers, and team chat. The agent might list new code changes, highlight completed tickets, and flag any blockers. By morning, the entire engineering team has a synthesized report of what happened in development – without anyone manually compiling it.
  • Release Notes Generation: Product teams often spend time writing release notes for new features or updates. Ayraa can handle this by collating information from commit messages, pull request descriptions, and project management boards. A workflow can automatically produce draft release notes whenever a new version is ready, complete with a list of new features, improvements, and bug fixes – all formatted and ready to review or publish.
  • Sales Signals and Insights: Sales teams can set up agents to watch for important signals, like big deals moving through the pipeline or notable customer activities. A workflow might monitor the CRM for any high-value opportunities that changed status, scan inbound emails for key customer inquiries, and check news sources for mentions of target accounts. It then delivers a daily brief highlighting things like “Lead X became a qualified opportunity” or “Client Y was mentioned in the news today,” giving the sales team actionable intelligence without anyone digging through data.
  • Operations Recaps: Operations and executive teams benefit from regular summaries of business health. An operations recap workflow can pull key metrics from various systems – finance software, inventory databases, support ticket logs, uptime monitors, etc. – and compile them into a concise overview. The AI might note, for instance, that “Customer support tickets dropped 10% this week” or “Inventory levels are within normal range.” These recaps ensure leadership is always up to speed on the state of the business, and they’re generated automatically at whatever interval makes sense (daily, weekly, monthly).

Each of these use cases demonstrates the core value of Ayraa’s Deep Research Workflows: time-consuming information gathering and analysis can be delegated to an intelligent agent. Whether it’s an engineer, a salesperson, or an executive, everyone gets to reclaim time and make better decisions because the right information is delivered to them with minimal effort.

Wherever momentum depends on connected knowledge, Ayraa's Workflows quietly take the weight.

A New Paradigm in Workspace Knowledge Management

While modern workspaces are scattered and disorganized, Ayraa’s Deep Research Workflows bring structure that lasts. Set them once, and they run on a schedule—continuously organizing and compressing your digital workspace into clear, distilled reports and insights.

Instead of digging through chat threads or running manual searches, you define the outcome once. Ayraa handles the rest—pulling data from across your tools, synthesizing it, and delivering exactly what you need.

Ayraa takes in all the raw data and pulls out what’s actually useful—clear, timely insights that help you move forward.

The result? Knowledge that’s not just organized—but always ready, always relevant, and always waiting for you.

What Comes Next

We’re not stopping with workflows. We’re building an entire ecosystem of plug-and-play use cases—modeled on how real teams actually operate. From product sprints to pipeline reviews, leadership recaps to customer intelligence, every workflow is designed to save time, reduce noise, and shift how work gets done.

Soon, this won’t feel like a feature. It’ll feel like the default.

We believe this is the future of work: proactive, composed, and quietly intelligent.
And Ayraa brings that future to you—now.

Welcome to a workday that works for you.

Ayraa Product Updates Mar 15 - Mar 27, 2025

New Features
  • Add Confluence Search Tool – Introduces an integrated Confluence search capability that uses a hybrid approach (native API plus RM semantic search) to deliver more comprehensive content retrieval.
  • Add Meetings Tool (searching over meeting transcripts) – Adds a new capability for searching through meeting transcripts so that users can quickly retrieve discussion details; note that this feature remains blocked by a pending dependency in the backend integration.

Enhancements

  • Deep Research Workflow
    Remove "work" from email prompts – The UI text ("Enter your work email") has been updated by removing the term "work" to ensure clarity and consistency during signup and across related screens.
    Decouple Workflow Save from Execution – Workflow creation now allows users to save without triggering immediate execution, thereby aligning with user expectations and design guidelines.
    Personalize Workflow Reports – Workflow report visibility has been revised so that reports are always tied to the user who triggered the workflow rather than being shared broadly.

  • Revise Critical Scan Query – The critical pricing scan query has been updated to use a non-collection approach, improving automation reliability when handling pricing structure queries.
  • Integrate LLM-Based Filtering – Enhanced connector search functionality now uses LLM-based filtering (especially for Slack), which reduces irrelevant noise and improves overall result quality.

+4 other minor enhancements

Bug Fixes
A broad range of bug fixes has been implemented to address issues across workflows, reporting, and integrations:

  • Address Pagination Issue in Workflow Tiles – Increased the data limit (from 10 to 100) so that users can scroll to view additional workflow tiles without a design rework.
  • Resolve Multiple UI/UX Issues in Workflow Pop-Up – Improvements include matching button sizes, proper spacing (10px gaps), and reduced font size in certain areas for better consistency.
  • Fix Mobile and Web Email Formatting for Workflow Reports – Email presentation has been refined to align with Figma designs across devices.
  • Ensure Correct Data Appears in Workflow Reports – Adjustments to the JQL query have resolved issues where workflows were previously returning no data or references.
  • Address Ad Hoc Meeting Summary Delivery – Corrected an issue where only the host was receiving meeting summaries, ensuring that all qualified participants receive notifications.
  • Address Missing Jira References in Salesforce Responses – Resolved a token-related issue that was causing only a partial set of Jira references to appear.
  • Reinforce Personalization of Workflow Reports – Adjustments have been made to ensure that reports appear only for the creator, even for shared workflows.
  • Restore Salesforce Query Functionality for Assist – Updated query logic now prevents errors and ensures Salesforce data is returned correctly.
  • Automate Sanity Checks Post Deployment – Automated critical scans now follow commit deployments, ensuring that new changes are promptly verified.
  • Fix Google Calendar Search Chat Issue – The search+chat feature now reliably returns responses for calendar-related queries.

+41 other bug fixes

Ayraa Product Updates Feb 15 - Feb 28, 2025

New Features
Ayraa Workflows - Powered by Deep Research - Pilot Project: Implemented the proof of concept for an agentic framework to automate workspace analysis and document generation
Enhanced LLM Integration: Added Sonnet 3.7 support with thinking flag configuration for improved reasoning capabilities
Advanced Prompt Processing: Implemented ability to parse outputs from LLM prompt tool, enabling more sophisticated workflows
Planning Phase Framework: Conducted POC of planning phase with various simulated user inputs to improve execution plans quality
Specialized Prompt Templates: Created generate_jira_jql and launch_notes_planner prompt templates to enhance AI functionality

Enhancements
Search Optimization: Enhanced keyword search tools for more comprehensive results.
Results Filtering: Added parameter to control whether to display all results or only most relevant ones across applications.
Tools Registry Optimization: Restructured tools categorization to improve efficiency
Responsive Design Improvements: Enhanced UI for 13-inch screens and fixed Teams scrollbar and cursor issues.
Advanced SFDC Query: Improved Salesforce query functionality with better prompting and categorization.
Profile Localization: Updated profile page terminology to be more US-friendly by changing "Designation" to "Role" and removing gender preference options.
Jira Tools Enhancement: Updated Jira tools to include labels and sprint fields in the input schema.

+2 other minor enhancements

Bug Fixes
At-Ayraa Improvements: Fixed multiple issues including responses hanging due to Slack character limit, latency in request handling, and thread context confusion.
Collections Functionality: Resolved multiple collections issues including non-clickable citations, editing functionality, and rapid app switching.
Meetings Transcripts: Fixed issue where stopping recording during meetings didn't provide transcripts for recorded portions.
Performance Improvements: Investigated and fixed 99% CPU utilization on RDS, improving overall system performance.
API Responses: Fixed incomplete SF-Tool API responses and corrected issue with opportunities links.
Dashboard Analytics: Fixed search count functionality when using recent mode filter.
Error Handling: Resolved various errors including Hubspot queries with Anytime filter, Jira text search, and workflow responses.
UI Refinements: Fixed search grid view text overflow, Gmail reconnect button color, and left menu thickness issues.

+7 other miscellaneous fixes

Ayraa Product Updates Feb 01 - Feb 15, 2025

New Features
• Redesigned Day 0 empty screen with improved header visuals and enhanced call-to-action styling, making first-time usage more intuitive.
• Introduced a new Bedrock prompt for AI summarization compatible with multiple LLM models, enhancing the quality of our summary capabilities.
• Added support for rich-text formatting in release note templates, ensuring instructions and labels appear in proper context.

Enhancements
• Enhanced 13-inch responsive layouts for the App Integration and Profile pages, providing better user experience on smaller screens.
• Updated sharing features for Meetings and Collections with improved Teams sharing and better Slack citation navigation.
• Improved latency on Production for Collections by prefetching APIs and reducing unnecessary overhead, resulting in faster load times.
• Streamlined load times for Recent Mode search and assist responses by introducing parallel prompt processing and optimizing queries.
+2 other improvements

Bug Fixes
• Fixed critical issue where Slack automation no longer displayed email IDs, restoring proper functionality.
• Resolved Gmail.com-based signup/login failures that were preventing users from accessing the platform.
• Addressed persistent "[sign-up] Invite needed to access Ayraa" error that appeared as users switched pages.
• Eliminated duplicate search results for collections links, providing cleaner search results.
• Fixed broken clickable cards and missing attachment previews in Collections.
+9 other miscellaneous fixes

Ayraa Product Updates Jan 15 - Jan 31, 2025

Enhancements

• Implemented 'Recent Mode' for search and assist with turbocharged results from the last 90 days
• Fixed signup process for deleted tenant/account scenarios

+13 other minor enhancements

Bug Fixes

• Fixed issue where meeting transcripts were not appearing due to null pointer exception
• Resolved problem where certain queries were hanging in Search via web app
• Corrected scoring issues in Recent Mode that were affecting search result relevance
• Fixed issue where JIRA-related At-Ayraa queries were not providing expected responses
