
*Please note, this article was originally published on Medium here.*
It’s been a while since I last sat down to write this blog. What I used to write was pretty technical, down-to-earth, and closely connected to machine learning problems. But in recent years, most of my work, while still technology-related, has been much more focused on the transformations within organizations driven by AI technology.
While building Neurons Lab, I led multiple AI transformation projects, from enabling state banks to use tools like ChatGPT and Midjourney, to building highly customized AI solutions for enterprises and fine-tuning LLMs for top startups. I’ve come to realize that the hardest part of building real-world AI solutions isn’t the research, AI operations, or infrastructure, but business viability.
The topic I want to focus on today is making money from AI and building a strong business case. Most organizations work with a top-down approach, and if the unit economics, ROI, and business case are right, the transformation is much more likely to happen. After hundreds of projects, I see relatively simple mathematical patterns of enterprise AI unit economics and growth that I’d like to share with you today.
The goal of this article is to show you typical approaches enterprises are taking to integrate AI internally, and what you can learn from them. That way, you can take appropriate risks and manage your expectations properly.
Sublinear / logarithmic

The standard approach in most companies involves buying corporate subscriptions to Google, Microsoft, AWS, and various AI products. These are then deployed across different teams — sales, marketing, operations, software engineering, etc.
Initially, this works great. You see real productivity gains, especially if you train your teams well. However, with each additional user, breakthroughs become less frequent. The issue is the fixed “brain” capacity model. Each new person using the same tool shares the same “brain,” the same prompts, the same resources. Improvement becomes sublinear because you can’t expect everyone to consistently improve prompts or the tools themselves. In reality, maybe one in ten will offer a new suggestion or a smarter way to use the tool. That’s just how organizations work.
Because of this, you shouldn’t expect exponential or even linear growth from this approach. You can achieve quick and cheap gains at the start, especially when your top performers begin using the tools. But the more you scale, the more sub-linear the results become.
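Here’s a minimal sketch of that curve. The numbers are assumptions I made up purely for illustration: each additional user contributes proportionally less new value to the shared “brain,” because most of the useful prompts and tricks are already in circulation.

```python
# Toy model of sublinear (logarithmic) returns from shared AI assistants.
# Assumption (mine, for illustration): the i-th user adds base_gain / i of
# new value, since each extra person mostly reuses prompts and workflows
# the shared "brain" already contains.

def cumulative_gain(n_users: int, base_gain: float = 10.0) -> float:
    """Total productivity gain across n_users; grows like base_gain * ln(n)."""
    return sum(base_gain / i for i in range(1, n_users + 1))

for n in (1, 10, 100, 1000):
    total = cumulative_gain(n)
    print(f"{n:>5} users -> total gain {total:6.1f}, per-user {total / n:5.2f}")

# Total gain keeps rising, but the per-user gain collapses from 10.00 at
# n=1 to ~0.52 at n=100 -- the flattening curve described above.
```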
Fast, cheap initial results with out-of-the-box AI assistants are the main benefit of this model.
So, what’s the failure scenario here? It’s over-investing in applications that scale sub-linearly. You usually don’t need a custom GraphRAG to answer employees’ simple policy questions, because not everyone will use it all the time. You likely don’t need a custom AI agent for uploading PDFs and analyzing resumes; a standard chatbot subscription will suffice. And you probably don’t need a custom deep-research agent when you can just use ChatGPT, Claude, Perplexity, Gemini, or any other readily available tool.
Custom solutions will make your initial start more expensive and slower, with an ROI that will still flatten out in the end, for the reasons I’ve outlined.
Linear

To escape the trap of sub-linear scaling, ensure that every new user and every new application of AI yields at least the same level of improvement for your business. In my experience, this happens when you shift from automating simple occasional tasks with AI assistants to covering whole professions, areas of responsibility (AORs), and business processes.
Consider document processing, such as claims processing in insurance. You can quantify the time and expense required to process a single claim. When you introduce AI, the improvement delta will be consistent. Whether you have one person or a thousand working on similar documents with roughly the same performance, the delta between AI and human performance remains constant. This allows you to scale linearly from one to a thousand people.
The same principle applies to customer support agents handling diverse inbound requests. The typical price delta between human and AI responses remains consistent, regardless of whether you receive 100 or a million requests per day.
In linear models, the most critical factor is planning for volume and ROI. The initial investment requires you to bring your own data and work more carefully with prompts and generative AI architectures. You’re not just addressing a single task but a wide range of tasks or even an entire profession.
The real benefits come from high volume, not from analyzing one document or handling five requests per week. At low volumes, you can use ChatGPT or Claude with effective prompts and MCPs, which will be cheap and fast. However, for large-scale operations involving thousands or millions of documents or customer support requests, ChatGPT won’t be enough, and investing in a custom solution will very likely be more effective.
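To make the volume argument concrete, here’s a toy break-even calculation. Every number in it is an illustrative assumption, not a benchmark: a custom pipeline has a one-off build cost but a larger per-document saving than an off-the-shelf assistant.

```python
# Toy break-even model for the linear pattern. All figures are assumed
# for illustration only.

CUSTOM_BUILD_COST = 200_000.0   # one-off investment in a custom pipeline (assumed)
CUSTOM_SAVING = 4.0             # saving per document vs. manual work (assumed)
OFF_THE_SHELF_SAVING = 1.5      # per-document saving with a stock chatbot (assumed)

def breakeven_volume() -> float:
    """Volume at which the custom build overtakes off-the-shelf:
    CUSTOM_SAVING * v - CUSTOM_BUILD_COST > OFF_THE_SHELF_SAVING * v."""
    return CUSTOM_BUILD_COST / (CUSTOM_SAVING - OFF_THE_SHELF_SAVING)

print(f"Break-even at ~{breakeven_volume():,.0f} documents")  # ~80,000
for volume in (10_000, 100_000, 1_000_000):
    custom = CUSTOM_SAVING * volume - CUSTOM_BUILD_COST
    stock = OFF_THE_SHELF_SAVING * volume
    print(f"{volume:>9,} docs: custom {custom:>12,.0f}  off-the-shelf {stock:>12,.0f}")
```

Both options grow linearly with volume; the custom route only wins once volume clears the break-even point, which is exactly why planning for volume matters so much in this pattern.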
But be realistic in your expectations. Document processing and customer support are traditionally metrics- and KPI-driven processes. When replicating these processes with AI, you’re constrained by performance expectations: the delta between AI and human performance will be steady, so don’t expect magical, exponential returns. Unlike breakthrough innovations, document processing and customer support require reliability. Which is completely fine.
Exponential

Linear AI applications are efficient, but they can also be relatively unimaginative. The whole idea is to codify human processes as much as possible so we can hire fewer people and save costs. It’s a good strategy, but it likely won’t give you exponential growth. You can still achieve exponential growth with AI using a couple of patterns.
Betting on technology for exponential growth
The first is the well-known technological pattern of data effects and network effects. The problem with sublinear and linear cases is that you work in a very fixed environment where data is siloed and connected to a person or a process. There’s typically little exchange between different streams of work because a single AI assistant works for a single person, and a single document processing automation works for a single set of AORs.
Data effects can happen when you consolidate previously siloed data sources, creating opportunities for upsells and cross-sells. For example, this happens a lot in banking, where you can merge data from retail banking, investment banking, corporate banking, and wealth management departments to identify new opportunities.
Next, there are network effects, where you distribute feedback through the system. Improvements come from real-world users giving a thumbs up or thumbs down, rather than from a developer tweaking prompts in a chatbot or adjusting document processing automation for one department. Because every user’s feedback improves the shared model for everyone, improvement compounds rather than growing linearly or sub-linearly.
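A rough sketch of the difference, with made-up rates purely for illustration: a developer’s tweaks add a fixed improvement per week, while user feedback adds an improvement proportional to the size of the user base, so quality compounds.

```python
# Toy contrast between a developer-tuned model and a shared model improved
# by user feedback. All rates are illustrative assumptions, not measurements.

def developer_tuned(weeks: int, fixed_gain: float = 1.0) -> float:
    """One developer tweaks prompts: quality grows by a fixed amount per week."""
    return 100.0 + fixed_gain * weeks

def feedback_driven(weeks: int, users: int = 5_000,
                    gain_per_user: float = 1e-5) -> float:
    """Every user's thumbs up/down nudges the shared model, so the weekly
    improvement rate is proportional to the user base -- growth compounds."""
    quality = 100.0
    for _ in range(weeks):
        quality *= 1.0 + gain_per_user * users
    return quality

for w in (10, 52, 104):
    print(f"week {w:>3}: developer-tuned {developer_tuned(w):8.1f}, "
          f"feedback-driven {feedback_driven(w):8.1f}")
```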
Betting on humans for exponential growth
Achieving exponential improvements isn’t just technological; you can also get there by unlocking more human potential. In the linear case, you essentially want to remove people from simple, repetitive processes. But you can also free people from bureaucracy and repetitive work so they can create more business.
A good example is wealth management and private banking, where relationship managers spend much of their time on bureaucracy, follow-ups, emails, research, and requests to analysts instead of meeting real clients and building real relationships, which is what creates new business nonlinearly. Emotional connection is much harder to codify as a process than customer support or document processing.
That’s why another pattern we see for exponential returns from AI is to split roles and professions creatively: keep the relationship-driven, money-generating parts entirely human, and automate away the repetitive parts, much as in the linear case.
Cheat sheet

- Want a fast and cheap productivity boost? Give your team out-of-the-box AI copilots and teach them how to use the tools properly. Don’t reinvent the wheel by building something custom unless you absolutely have to.
- Want to handle a large volume of tasks reliably? Build custom workflows that closely mimic human professions. Careful metrics measurement and agent design will be key here. You most likely don’t need a custom LLM, and you can often reuse commercial accelerators.
- Want to go beyond and create new business opportunities? Break data silos to create data effects for upsells and cross-sells. Create network effects via a shared model that is updated from real-world feedback. Free up humans for the emotional-intelligence work that AI can’t do (yet). There are no limits here; you might even need your own LLM.
What’s next?
I don’t want to tell you that one model is better than another, or that you shouldn’t use out-of-the-box AI tools just because the returns will be sub-linear. Quite the opposite: within my own company and with my clients, I encourage everyone to use these tools all the time. It’s the lowest-risk investment and the fastest route to initial productivity gains.
But I’m doing this with the clear understanding that the curve flattens. If I want major boosts for myself, my company, or my clients, I have to invest more in data, models, and infrastructure complexity, which is, of course, higher risk, but also promises higher returns at scale.
I want to offer this structure as a neutral map to help you with decision-making, rather than pushing you to choose one specific approach over another. Reach out to me if you need additional guidance or help in transforming your organization.
Thank you for reading!