Validating a low-code approach to real-time AI and data transformation / Fleak

Scope
- Product Discovery
- Market Research
- Interviews
- UX/UI Design
- User testing
- Branding
Real-time AI is becoming essential, yet deploying it within modern data infrastructures remains complex and costly. Existing streaming tools are powerful but hard to use, requiring deep engineering expertise, painful infrastructure setup, and offering little visibility during data transformations.
Fleak addresses this bottleneck by making real-time data transformation and AI serving faster, more transparent, and easier to operate through a low-code platform and real-time transformation engine.
The challenge: defining a product direction that would reduce adoption friction for users
As part of Sudolabs (the software agency I worked with), the PM/tech lead, an engineering team, and I were tasked with multiple challenges:
- Research and understand trends and opportunity spaces across the ML, AI, and data landscape
- Conduct user interviews to uncover real, high-impact problems
- Reconcile the founders’ deep understanding of enterprise data pipeline complexity with their initial product vision - a Retool-like, node-based builder that early prospects (data engineers) found hard to navigate
- Translate research insights into clear hypotheses and potential value propositions
- Conceptualise, scope, and validate a simplified, linear transformation workflow that reduced cognitive and technical overhead
- Launch the MVP on Product Hunt, gather insights, and move from there
The core task was to move from a technically impressive yet hard-to-navigate solution to a product direction that users could understand, adopt, and trust early.

Product discovery: the low-code dilemma
Multi-week product discovery revealed significant technical friction and clear gaps in the big data and AI infrastructure space. Interviews with ~50 SMEs surfaced consistent pain points:
- Steep learning curve: Streaming tools like Apache Flink require deep engineering expertise and are hard to use.
- Infrastructure complexity: Cloud setup, especially on AWS, is slow, costly, and painful.
- Lack of visibility: Limited real-time insight into data transformations turns debugging into a black box.
Teams want speed and simplicity, yet engineers distrust restrictive low-code tools. Fleak saw an opportunity in a "low-code that generates code" approach: a linear, node-based transformation platform that empowers data analysts, reduces infrastructure overhead and cost, and enables faster experimentation and deployment without large engineering teams.
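To make the "low-code that generates code" idea concrete, here is a minimal illustrative sketch, not Fleak's actual implementation: the node schema, field names, and functions are all hypothetical. The point is that a declarative node list compiles into plain, readable code engineers can inspect instead of a black box.

```python
# Hypothetical sketch: a linear list of node definitions is compiled into
# plain Python that engineers can read, review, and run outside the platform.

NODES = [
    {"op": "rename", "from": "ts", "to": "timestamp"},
    {"op": "filter", "expr": "event['status'] == 'ok'"},
    {"op": "select", "fields": ["timestamp", "user_id", "latency_ms"]},
]

def compile_workflow(nodes):
    """Turn node definitions into readable Python source for one event."""
    lines = ["def transform(event):"]
    for node in nodes:
        if node["op"] == "rename":
            lines.append(f"    event['{node['to']}'] = event.pop('{node['from']}')")
        elif node["op"] == "filter":
            lines.append(f"    if not ({node['expr']}):")
            lines.append("        return None")
        elif node["op"] == "select":
            lines.append(f"    event = {{k: event[k] for k in {node['fields']}}}")
    lines.append("    return event")
    return "\n".join(lines)

if __name__ == "__main__":
    print(compile_workflow(NODES))  # prints the generated transform() source
```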


From reinvention to simplification - shaping the value proposition
Our value proposition evolved over the early months of the project as we deepened our understanding of the problem and the market. Initially, we aimed to center the product around building data pipelines for real-time streaming, leveraging the growing Apache Flink technology and community. However, this concept proved too novel for the audience, and the immediate need lay elsewhere - which led us to the concept of workflows, positioning Fleak as an AI and data workflow orchestrator.
Early concepts
We quickly explored early concepts to decide which direction to take. A node-based UI, though popular, surprisingly did not resonate with the early audience.


Designing the product around the mental model of IDEs
We chose a simpler linear workflow model, with each node having a single input and output, and aligned debugging with familiar IDE mental models. This led to a right-side debugging panel as a key interaction for validating JSON data transformations.
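A minimal sketch of what that linear, single-input/single-output model implies, assuming hypothetical step functions and event fields (this is illustrative, not Fleak's code): each step receives one JSON event and returns one, and the intermediate result after every step is captured, which is the kind of trace a right-side debugging panel could render, much like inspecting variables in an IDE.

```python
import json

def rename_ts(event):
    # Single input, single output: copy, transform, return.
    event = dict(event)
    event["timestamp"] = event.pop("ts")
    return event

def add_latency_flag(event):
    event = dict(event)
    event["slow"] = event["latency_ms"] > 500
    return event

STEPS = [rename_ts, add_latency_flag]  # strictly linear: no branching

def run_with_trace(event, steps):
    """Run a linear workflow and record the JSON after each step."""
    trace = [("input", event)]
    for step in steps:
        event = step(event)
        trace.append((step.__name__, event))
    return event, trace

if __name__ == "__main__":
    sample = {"ts": "2024-05-01T12:00:00Z", "user_id": 42, "latency_ms": 731}
    _, trace = run_with_trace(sample, STEPS)
    for name, state in trace:
        print(name, json.dumps(state))  # what a debugging panel would surface
```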




Product-led validation and a Product Hunt success that meant little
In the early stages, we focused on the core activation journey—from sign-up to the first AHA moment—using early engagement as the main signal of Product-Market Fit. With limited resources, daily API calls became the key metric. Acquisition combined founder-led warm outreach with a bottom-up, product-led approach suited to data analysts and engineers, while monetization was defined with support from a fractional CMO.
Self-serve onboarding
To reduce onboarding friction for self-serve users, we introduced a few key components:
- Ready-made templates (including ungated ones) with sample use cases to speed up time to value
- Documentation as a baseline for both acquisition and deeper education
- Tutorial videos and contextual tooltips

The hard truth and the lucky break
A private alpha with a small group of testers was followed by a public Product Hunt launch, where Fleak ranked #3 Product of the Day and #1 SaaS Product of the Week. While this delivered visibility and early leads, user feedback and data showed that the low-code interface wasn't producing the high-value signal the team needed to justify continuing with the product. The turning point came when an enterprise customer recognized the value of the underlying engine, leading to Fleak's first deal and a clear insight: at this early stage, the product needed to be an SDK.
Thank you for your time!
Credits
Team Fleak, including Yichen Yin and Bo Lei; and team Sudolabs, including Peter Papp and Sebastian Andil.