top of page

Maple is an operators VC based in Tel-Aviv, focused on enterprise infrastructure and being the ideal all-round collaborator for technical founders right from the notebook phase

  • Ben Tytonovich

One of the key metaphors I’ve been using in recent years is equating the common software stack into a tree trunk and branches.


For most personas/departments, there is one main piece of software that dominates a significant part of their usage and budget. This is the trunk, also occasionally referred to as the departmental operating system/platform. Onto this trunk, there are many smaller software solutions which often integrate with it - these are, naturally, the branches.


Startups frequently start as branches and eventually evolve (in reverse order) into a trunk. It makes sense - startups usually don’t have enough resources to develop a trunk as their MVP (or even within several versions later).


Founders need to make sure that there’s a clear pathway allowing them to evolve from their branch into a trunk. It means several things:

  1. The features of their branch (aka first few product versions) can be organically and strategically leveraged by the trunk.

  2. They won’t be easily chopped off if the incumbent occupying the trunk position decides to stop granting access to certain APIs.

  3. There might be a different branch that is better strategically positioned to grow into that trunk, so alternative routes are important for consideration.

  4. The key benefits of their branch are a strategic achilles’ heel of the incumbent trunk.

  5. The persona and budget tied to the trunk are the same as for the branch (at least ideally).

  6. And many more.


Lastly, another key point is that founders often overrate the importance of the differentiation of their specific branch. Or in other words, the importance of their penetration point. While the branch/penetration point is definitely important - the eventual trunk is no less important. And if two startups start off from two different branches, but eventually aim for the same trunk - then they’re competitors, even if not at first. And they aim for the same TAM, the same opportunity.

For founders and investors trying to wrap their head around main focus points for 2024 in AI delivery, data infrastructure, devops and cyber - following are some thoughts & potential trends in a (relatively) quick digest.


AI Delivery

  1. Maturity in AI delivery will have to step up as organizations will tire from more and more LLM-based PoCs failing to provide meaningful business value. The criteria for experimentation will increase and the seriousness of success criteria will be more evident for every LLM project. More resources will be diverted into adopting 3rd party LLM infrastructure solutions as startups and incumbents continue to improve their offerings while in-house development faces the harsh reality that there are no silver bullets when it comes to software development, even when LLMs are involved.

  2. Specialized API-based LLM Consumption (or LLM-less, the serverless equivalent) will become more of a norm as AI vendors will provide use case specific optimized (functionally and operationally) LLM offerings in a serverless manner for vertical use cases. This will accelerate the inclusion of more LLM-based features in software products from companies who most of us don’t necessarily consider as early adopters of AI-first technologies.

  3. AI Replacing UI is a massive gradual trend which will start to take place in 2024 in a meaningful way, where more workflows which required an employee controlling a mouse and keyboard in the past, will now be replaced by either autonomous software and/or by chat-based interfaces. As with every software trend, this will start in semi-autonomous adoptions, augmenting employees at first. But as agent-like workflows mature, this will disrupt many work processes in almost every single organizational department.

  4. AI Synthesis, which is basically the act of leveraging multi-modal model functionality in order to fuse together insights based on various modalities (whether in format - video, text & audio or in type - language, actions & time), will start taking place in several commercial use cases in a meaningful way for the first time.

  5. RAG, fine tuning and whichever evolution these processes will go through in 2024 (but essentially - the concept of AI enterprise-specific augmentation) will continue to dominate in AI delivery as the need for enterprise-grade LLM-based solutions that are precise, consistent and truthful will become the end-goal for many.

  6. Privacy and security have been marked as concerns in the midst of the LLM explosion from (almost) day one. Yet, the toolset with regards to how to put in the right guardrails with the right approach has largely been missing in 2023. The understanding of privacy and security implications (in parallel to the evolution of regulation) will mature in 2024. Presumably, with a larger portion of LLM projects heading to 3rd party providers (see point #1), some of the burden will fall on the vendors’ shoulders, who will benefit from more horizontal experience while working with several customers which will benefit the rapid evolution of best practices.

  7. The continuous commoditization of large models is a natural process for which we started to see initial signs already in 2023 and one can only assume will continue to occur in 2024. This is the result of multiple factors, including open source propositions becoming more mature. Will we eventually see a dynamic in which 95% of large model related use cases will require commoditized (and smaller?) model versions and only 5% of use cases will necessitate specialized offerings?


Data Infrastructure

  1. The unstructured data phenomenon will continue to explode as organizations identify further opportunities to leverage video, free text, raw documents, audio and various other formats. Early adopters will move into more complex use cases and newcomers will start extracting the initial value hiding in internal corporate knowledge, customer engagements, financial documents, video calls and whatnot. Needless to say, this is also the backbone for the AI chatbot trend of the past year.

  2. The whole data-in-motion category will continue to spread into more use cases and in a broader range of companies. Growing demand for immediate insights will drive adoption of real-time analytics. Machine learning processes reliant on streaming data will also naturally conquer more ground and thus increase data-in-motion’s importance. While costly and complex to implement for now, hopefully new solutions will mature to ease real-time pipeline adoption by more organizations.

  3. LLM-enhanced data features will become the norm in several existing data product categories. The low hanging fruit candidates for this are data quality and integrity platforms but also ETL-related processes, data cataloging and some data automations where LLM assistance can move the needle.

  4. Cost-awareness with regards to the various organizational data pipelines will continue to increase as IT/devops/data engineering leaders are becoming more wary of budgets dedicated toward lakehouses, infrastructure observability platforms, SIEMs and other significant data products.

  5. BI complexity, which is the result of the explosion of BI in so many companies in the past few years, will drive several new and new-ish trends to try and decrease it as much as possible. The semantic issue, involving the lack of consistency in KPI definitions (which led to the race after the illusive semantic layer) will continue to trouble organizations. The explosion of reporting dashboards in many organizations, which becomes worse and worse with every year, will also become even more troublesome. Democratization of reporting, led by conversational BI (i.e. bot-assisted reporting) will hopefully decrease some of this burden, but unfortunately, probably not by much, at least in 2024.

  6. Standard software development practices in data operations (i.e. Data as Code), where data practitioners and processes adopt solutions and best practices helping them improve operational excellence, will have to continue to take place on multiple levels, otherwise major data ops issues (data pipeline breakage, etc.) will become even more rampant as data-based processes continue to dominate the modern organization.


Devops

  1. AI-led intelligent automations with an added emphasis on autonomous decision making will likely attract more attention and traction as solutions in the field will reach maturity levels they haven’t reached before. Devops processes are a natural candidate as a target for the big AI agent question as founders will try to figure out what exactly are AI agents and how can they support devops processes in a more autonomous way. Prime candidates for autonomous automation are processes that have already gone through initial automation, such as different workflows within the CI/CD pipeline and many IaC-centric automations.

  2. Self-sufficient engineering, and specifically platform engineering as the main trend under this umbrella, will continue to spread, through internal developer portals and other methods which will lower the barrier to entry for developers to control more devops processes themselves, without significant outside help, in order to bring about more agility into R&D workflows.

  3. Existing trends from the past few years like the adoption of multi-cloud/hybrid environments will naturally continue to attract more and more attention as the need to support complex environments as a fundamental approach will continue to take place. IaC and Gitops are obviously here to stay and to evolve, as is the continuous dominance of Kuberenetes (vis-a-vis making it more automated and simpler).

  4. Vertical cloud propositions will continue to gain further traction, whether in the sector-specific sense (healthcare, bio, etc.) or the technological-specific sense (ML-related cloud workloads) and will present one of the only avenues where founders can meaningfully try and disrupt niches within the (almost) impenetrable incumbent cloud ecosystem.


Cyber Security

  1. LLM-related security threats can be divided into two main categories, at least for now - GenAI at the service of various social engineering attacks is one such category and threats on LLM infrastructure and applications is the other category. Attackers will surely leverage LLMs to enhance phishing/vishing and other techniques with more credible, multi-modal and relevant content (while potentially also fusing in more personal information), making these attacks less distinguishable. Threats on the LLM infrastructure layer (e.g. training data stored in the data warehouse) or the application layer (e.g. model manipulation / injection) are also just a matter of time. Actual CISO prioritization for 2024 for this new generation of LLM-related attacks will naturally depend on the actual threats experienced, and not just the hypotheticals.

  2. Non-human identities will continue to expand their gap on human identities, trending toward the 1 to 50 ratio of human to non-human identities per organization. This proliferation, which is driven by numerous digital transformation related processes, most of which are detailed in other parts of this article (including IaC, LLMs, RPAs, platform engineering, microservices and whatnot) will continue to aggravate the inability of security professionals to get a true grasp of how widespread non-human identities’ presence truly is. On the human side of things, the passwordless trend will continue to pick up steam as organizations have more advantages in adopting it versus potential friction.

  3. Significant productivity gains for SOC teams led by the implementation of autonomous AI is a large question mark which it shares with other infrastructure operations departments (e.g. devops, data engineering, etc.). There are certainly several use cases in which SOC teams can leverage next-gen SOAR platforms to bridge the growing pain of hard-to-find skilled security professionals. MSPs/MSSPs are also a natural candidate to benefit from a potential autonomous software boost to their operations, as the cyber security professionals shortage will continue to increase their traction.

  4. Devsec’s integration into every facet of the software development lifecycle will also take another step into maturity, with further consolidation of devsec offerings, more AI-reliant solutions decreasing the friction of adding them into dev workflows and additional companies making continuous security testing and vulnerability scanning standard practices. This is inevitable with the growth of software supply chain based continuous rise and even the potential of more code-based vulnerabilities being introduced due to the exponential growth of auto code generation being only semi-supervised.

  • Ben Tytonovich

Adoptability = the willingness and ability of a customer to adopt a new tool.


You would think that the existence of a pain in a target industry and the proposition of relevant value from software vendors would equate to adoption. But that’s not necessarily the case. Which is counter intuitive to many founders but is still very true.


Oftentimes during the ideation/validation phase, we find out about a significant pain in the market. As founders, we start thinking there are no available solutions out there and that this is why the pain remains. We then do our research and find out there are in fact many solutions in existence. Why then, aren’t these solutions being adopted?


Many reasons, among them -

  1. Too many SaaS solutions already being adopted

  2. Heavy data migrations needed for initial value

  3. Lots of integrations needed for initial value

  4. Lack of tech proficiency from potential users

  5. Nice-to-have but not critical enough to exhort energy on by the implementor

  6. Cumbersome solutions


The SaaS phenomenon started as a solution for better adoptability, among other motivators. But it is evident in the past 3-4 years that buyers are fed up with more and more SaaS solutions being offered in new/existing categories.


The no/low-code phenomenon was supposed to improve adoptability by decreasing the barrier to entry from the implementation/configuration angle. It only worked in several verticals.


One of the biggest attractions behind the (still mostly hypothetical) autonomous AI agents wave (a very ambiguous term) is their potential ability to bridge many adoptability-related gaps. An intelligent software component that can avoid many of the above obstacles -

  1. No need to open a new tab in the browser (aka SaaS saturation)

  2. Ability for self-configuration (aka no need for low/no-code proficiency)

  3. Ability to independently create new software integrations

  4. Flexibility in new data format digesting

  5. Autonomous operations without complex user adoption needed


If (and it’s a very big if) AI agents would indeed represent a viable autonomous product approach and if founders will find a niche where these agents can provide value already in the next 2 years, their ease of adoption will be a key factor in their ultimate success, managing to deliver value where there is either SaaS saturation or to new groups of users that weren’t tech proficient enough to adopt no/low code based solutions beforehand (on top of other competitive advantages).

bottom of page