Hashtag Jakarta EE #319

Welcome to issue number three hundred and nineteen of Hashtag Jakarta EE!

As I am writing this, I am sitting in my hotel in Johannesburg, South Africa. Me being here is actually a success story. I was supposed to present Jakarta EE at I Code Java on Tuesday and Wednesday. I have spoken at the conference twice before, once in Cape Town and once here in Johannesburg, but that was back in 2018 and 2019. A couple of weeks ago, the speakers were notified that the conference was cancelled. With flights and accommodation already booked and plans made, Phillip, Buhake, and I scrambled and created a substitute event. With funding through the Eclipse Foundation's Open Community Meetup concept and organisation by Jozi-JUG, we created JakartaOne by Jozi-JUG, where Phillip and I will be presenting. The event has 136 registered attendees as of today.

It looks like the release of Jakarta EE 12 may be rescheduled to Q4 rather than Q1 this year. Most of the vendors are working on their Jakarta EE 11 implementations, and only a couple of specifications are in a good state for Jakarta EE 12. One of them is Jakarta Persistence 4.0, which is already implemented by an alpha release of Hibernate 8.

GlassFish 8.0.0 was released earlier this week, which means that the GlassFish project, led by the wonderful folks at OmniFish, will be able to start focusing on the Jakarta EE 12 implementation.

I also want to remind you about Open Community eXperience 2026 in Brussels on April 21-23. Registration is open. Make sure to secure your spot now, and show up to my talk.

Ivar Grimstad


📚 StudyStream: Your AI Learning Companion That Actually Gets You!

This is a submission for the Algolia Agent Studio Challenge: Consumer-Facing Non-Conversational Experiences

Hello, Lifelong Learners! 🌟

Let me tell you a story. Last month, I was trying to learn TypeScript (ironic, right?). I had 47 browser tabs open, three different courses bookmarked, and absolutely NO idea where I left off. Sound familiar?

That frustration led me to build StudyStream – a learning companion that actually remembers where you are in your journey! 🚀

studystream-ten.vercel.app

💡 So What’s StudyStream All About?

Think of it as your personal study buddy who:

  • 📝 Knows exactly what you’re learning
  • 🎯 Suggests what to study next
  • 🏆 Celebrates your wins (with actual confetti!)
  • 📊 Tracks your progress so you don’t have to

It’s NOT another boring e-learning platform. It’s designed to make studying feel like a game you actually want to play!

GitHub: aniruddhaadak80/studystream

📚 StudyStream – AI-Powered Learning Assistant

Next.js
TypeScript
Tailwind
Algolia

Master programming with AI-powered proactive learning! StudyStream is a non-conversational AI assistant that proactively suggests what to learn next based on your progress and context.

✨ Features

🧠 Proactive AI Learning

  • Context-Aware Suggestions – AI recommends topics based on what you’re studying
  • Smart Quiz Selection – Questions matched to your current skill level
  • Adaptive Difficulty – Content adjusts to your performance

🎮 Gamification

  • Progress Tracking – Track completion across all topics
  • Achievement Badges – Unlock badges for milestones
  • Streak Counter – Build daily learning habits
  • XP System – Earn points for completing quizzes

📖 Rich Content

  • 10 Study Topics across JavaScript, Python, React, TypeScript, CSS
  • 30+ Practice Questions with explanations
  • Code Examples with syntax highlighting
  • Key Terms for each section

🎨 Beautiful UI

  • Focus Mode Design – Distraction-free learning environment
  • Dark Theme – Easy on the eyes for long study sessions
  • Smooth Animations -…
View on GitHub

✨ Features That’ll Make You Go “Ooh!”

🔍 Smart Search That Reads Your Mind

Type “JavaScript closures” or “how to center a div” (we’ve all been there 😂) and get instant, relevant content.


📈 Progress Tracking

Visual progress bars, streaks, and statistics. Because seeing how far you’ve come is incredibly motivating!


🎮 Gamification Done Right

  • XP System: Earn points for completing topics
  • Streak Counter: Keep that fire burning! 🔥
  • Achievement Badges: Collect ’em all
  • Confetti Explosions: Because you deserve to celebrate!



💭 AI-Powered Suggestions

Based on what you’re learning, StudyStream suggests related topics. Learning React? Here’s some TypeScript to go with that!

📝 Interactive Quizzes

Test your knowledge with practice questions. Immediate feedback helps you learn faster!


🌙 Gorgeous Dark Mode

Easy on the eyes during those late-night study sessions.

🛠️ Under the Hood (Tech Stack)

Here’s what’s powering this learning machine:

Technology | Purpose
Next.js 16 | The backbone – SSR, app router, everything!
TypeScript | Type safety = fewer bugs = happy developer
Algolia | Blazing-fast search across all content
Framer Motion | Those satisfying animations
Tailwind CSS | Styling at the speed of thought

The Algolia Integration 🔮

This is where the non-conversational AI magic happens. Algolia handles:

import { algoliasearch } from 'algoliasearch';

// Placeholder credentials and minimal types, for illustration only
const searchClient = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');

interface SearchFilters { category?: string }
interface StudyTopicRecord { objectID: string; title: string }

// Search across topics and questions
export async function searchTopics(query: string, filters?: SearchFilters) {
  // Build an Algolia filter string from the optional filters
  const filterString = filters?.category ? `category:${filters.category}` : '';

  const results = await searchClient.searchSingleIndex({
    indexName: 'study_topics',
    searchParams: {
      query,
      filters: filterString,
      hitsPerPage: 20,
    },
  });

  return results.hits as StudyTopicRecord[];
}

Why Non-Conversational AI? 🤔

Unlike chatbots, StudyStream uses AI in the background. It’s:

  • Analyzing content to suggest related topics
  • Predicting difficulty based on your progress
  • Optimizing search to surface the most relevant content

You don’t see it, but it’s always working for you!

📚 What Can You Learn?

Currently featuring topics in:

  • JavaScript – From basics to async/await
  • Python – Data structures, algorithms, and more
  • React – Components, hooks, and best practices
  • TypeScript – Types, interfaces, generics
  • CSS – Flexbox, Grid, and modern layouts

And I’m constantly adding more!

🎯 The Learning Experience

Here’s how a typical session looks:

  1. Pick a topic that interests you
  2. Read through the beautifully formatted content
  3. Take a quiz to test understanding
  4. Earn XP and watch your progress grow
  5. Get suggestions for what to learn next
  6. Repeat and keep that streak alive! 🔥

🚀 Impact & Learnings

Building StudyStream was itself a learning experience! I discovered:

  • Gamification psychology: Small rewards create big motivation
  • Content structure: How to organize information for learning
  • Algolia’s power: Not just for e-commerce – perfect for educational content!
  • Progressive enhancement: Works without JavaScript, amazing with it

🔮 Future Plans

This is just the beginning! Coming soon:

  • [ ] More programming languages (Rust, Go, etc.)
  • [ ] Spaced repetition algorithm
  • [ ] Social features – study with friends!
  • [ ] Mobile app version
  • [ ] AI-generated practice problems

🎉 Try It Yourself!

I’d love for you to take StudyStream for a spin! Pick a topic, complete a quiz, and let me know how it feels.

studystream-ten.vercel.app

Your feedback means the world to me! ⭐

Built with 💜 for the Algolia Agent Studio Challenge

P.S. – Complete 5 quizzes correctly and you'll unlock a special achievement. What is it? You'll have to find out! 🏆

I Built a Python CLI Tool for RAG Over Any Document Folder

A zero-config command-line tool for retrieval-augmented generation — index a folder, ask questions, get cited answers. Works locally with Ollama or with cloud APIs.

Every time I wanted to ask questions about a set of documents, I’d write the same 100 lines of boilerplate: load docs, chunk them, embed them, store in a vector DB, retrieve, generate. I got tired of it. So I built a CLI tool that does it in two commands.

The Problem

RAG prototyping has too much ceremony. You have a folder of PDFs, Markdown files, maybe some text notes. You want to ask questions about them. Simple enough in theory.

In practice, you’re wiring up document loaders, picking a chunking strategy, initializing an embedding provider, setting up a vector store, writing retrieval logic, and then finally getting to the part you actually care about: generating an answer. And you do this every single time you start a new project or want to test a new document set.

Existing solutions sit at the extremes. Full frameworks like LangChain and LlamaIndex are powerful, but they’re heavy. You pull in a framework with dozens of abstractions just to ask a question about a folder. On the other end, tutorial notebooks are disposable. They work once, for one demo, and you throw them away.

I wanted something in the middle. A CLI that’s zero-config for the common case, configurable when you need it, and built from pieces I can reuse in other projects. No framework dependencies. No notebook rot. Just a tool that does one thing well.

What I Built

rag-cli-tool gives you two commands:

rag-cli index ./my-docs/
rag-cli ask "What is the refund policy?"

That’s it. Point it at a folder, it indexes everything. Ask a question, it answers from your documents. Supported formats include PDF, Markdown, plain text, and DOCX.

Under the hood, the pipeline is straightforward. index loads documents from the directory, splits them into overlapping chunks using a recursive text splitter, generates embeddings, and stores everything in a local ChromaDB instance. ask embeds your question, retrieves the most similar chunks, and generates an answer using only the retrieved context — strict RAG, no hallucination from external knowledge.
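The overlapping-chunk step can be sketched in a few lines of plain Python. Note this is a rough illustration: the sizes and names are made up, and the actual tool uses a recursive text splitter with separator logic rather than fixed character windows.

```python
# Rough sketch of overlapping chunking; sizes and the character-based
# splitting are illustrative, not the tool's actual defaults.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap, so content
    straddling a boundary appears intact in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the window already covers the end of the text
    return chunks
```

The overlap is what keeps retrieval robust: a sentence cut in half at one chunk boundary is whole in the neighboring chunk.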

The tech stack is deliberately boring. ChromaDB for the vector store because it runs locally with zero setup — no Docker, no server, just a directory. Typer for the CLI framework because it gives you type-checked arguments and auto-generated help for free. Rich for terminal output because progress bars and formatted answers make the tool pleasant to use. Pydantic Settings for configuration because environment variables and .env files are the right answer for CLI tools.

You can run it fully local with Ollama (no API keys needed) or use cloud providers:

# Local -- no API keys
RAG_CLI_MODEL=ollama:llama3.2 RAG_CLI_EMBEDDING_MODEL=ollama:nomic-embed-text \
  rag-cli ask "What are the payment terms?"

# Cloud -- Anthropic + OpenAI
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
rag-cli ask "What are the payment terms?"

Architecture — Built for Reuse

This is where rag-cli-tool diverges from a typical weekend project. The repository contains three independent packages, not one monolith:

src/
├── rag_cli/       # CLI interface (Typer + Rich)
├── llm_core/      # LLM abstraction layer (providers, config, retry)
└── rag_core/      # RAG pipeline (loaders, chunking, embeddings, retrieval)

llm_core handles everything related to calling language models. It defines a provider interface, implements Anthropic and Ollama adapters, and includes retry logic with exponential backoff. It knows nothing about RAG, documents, or CLI output.

rag_core handles the RAG pipeline: loading documents, chunking text, generating embeddings, storing vectors, and retrieving results. It depends on llm_core for embedding providers but has no opinion about how you present results to users.

rag_cli is the thin layer that wires everything together. It handles argument parsing, progress bars, and formatted output. The actual logic is a few lines of glue code.

The reason for this separation is practical, not academic. I build AI projects regularly. The next one might be a web app, a Slack bot, or an API service. When that happens, I don’t want to extract RAG logic from a CLI tool. I want to import rag_core and start building. Same for llm_core — provider switching, retry logic, and configuration management are problems I solve once.

Every major component has an abstract base class. BaseLLMProvider, BaseEmbedder, BaseChunker, BaseRetriever, BaseVectorStore. Today I have one implementation of each. Tomorrow I can add a GraphRAG retriever or a Pinecone vector store without touching existing code. The abstractions aren’t speculative — they’re the minimum interface each component needs to be swappable.
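As a rough illustration of that pattern (hypothetical names and a toy implementation, not the real rag_core classes):

```python
# Hypothetical sketch of the swappable-component pattern; the real
# rag_core base classes and method signatures may differ.
from abc import ABC, abstractmethod


class BaseVectorStore(ABC):
    """The minimum interface a vector store needs to be swappable."""

    @abstractmethod
    def add(self, ids: list[str], vectors: list[list[float]]) -> None: ...

    @abstractmethod
    def query(self, vector: list[float], top_k: int) -> list[str]: ...


class InMemoryStore(BaseVectorStore):
    """Toy stand-in: ranks stored vectors by dot-product similarity."""

    def __init__(self) -> None:
        self._data: dict[str, list[float]] = {}

    def add(self, ids: list[str], vectors: list[list[float]]) -> None:
        self._data.update(zip(ids, vectors))

    def query(self, vector: list[float], top_k: int) -> list[str]:
        def score(item: tuple[str, list[float]]) -> float:
            _, stored = item
            return sum(a * b for a, b in zip(stored, vector))

        ranked = sorted(self._data.items(), key=score, reverse=True)
        return [doc_id for doc_id, _ in ranked[:top_k]]
```

A ChromaDB-backed store would implement the same two methods, so callers never notice the swap.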

The project has full test coverage across all three packages — 37 tests covering providers, configuration, chunking, embeddings, retrieval, and vector store operations.

Design Decisions

Four decisions shaped the project, each with a specific reason:

ChromaDB over FAISS or Pinecone. FAISS requires numpy gymnastics for persistence and doesn’t store metadata natively. Pinecone requires an account and network access. ChromaDB gives you a local, persistent vector store with metadata filtering in one line: ChromaStore(persist_dir=path). For a CLI tool that should work offline, this was the only real choice.

Typer over Click. Click is battle-tested, but Typer gives you type annotations as your argument definitions. No decorators for each option, no callback functions. You write a normal Python function with type hints, and Typer generates the CLI. The help text writes itself.

Pydantic Settings for configuration. CLI tools need to read config from environment variables and .env files. Pydantic Settings does both, with validation, default values, and type coercion. One class definition replaces a dozen os.getenv() calls with fallback logic.

Provider routing via model string prefix. Instead of separate config fields for provider selection, the model string does double duty: claude-3-5-sonnet-latest routes to Anthropic, ollama:llama3.2 routes to Ollama. One config field, zero ambiguity. This pattern scales to any number of providers without config proliferation.
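A minimal sketch of what that routing might look like (illustrative only; the real implementation and supported prefixes may differ):

```python
# Illustrative sketch of routing by model-string prefix.
def resolve_provider(model: str) -> tuple[str, str]:
    """Split a model string into (provider, model_name).

    'ollama:llama3.2'          -> ('ollama', 'llama3.2')
    'claude-3-5-sonnet-latest' -> ('anthropic', 'claude-3-5-sonnet-latest')
    """
    if ":" in model:
        prefix, name = model.split(":", 1)
        if prefix in {"ollama", "openai"}:
            return prefix, name
    # No recognised prefix: default to Anthropic, as in the post's examples
    return "anthropic", model
```

Adding a new provider is one entry in the prefix set plus an adapter class; the config surface does not grow.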

What I Learned

The 80/20 of RAG tooling surprised me. I expected the infrastructure — vector stores, embedding APIs, retrieval logic — to consume most of the development time. Instead, chunking decisions dominated. How big should chunks be? How much overlap? Which separators produce coherent boundaries? The pipeline code was straightforward; the tuning was where the real work happened.

CLI-first development forces good API design. When your first consumer is a command-line interface, you can’t hide behind web framework magic. Every input is explicit, every output is visible. This discipline produced cleaner interfaces in llm_core and rag_core than I would have gotten starting with a web app.

I intentionally shipped without several features: chat mode with conversation history, benchmarking against different chunking strategies, a web UI, and support for more vector stores. These are all reasonable features. They’re also scope creep for a v0.1. The foundation is solid, the abstractions are in place, and each of those features is an afternoon of work because the architecture supports extension.

Try It

The best developer tools solve your own problems first. rag-cli-tool started as “I’m tired of writing this boilerplate” and turned into reusable building blocks for my entire AI project portfolio. If you work with documents and want a fast way to prototype RAG pipelines, give it a try.

# Install from PyPI
pip install rag-cli-tool

# Or from source
git clone https://github.com/LukaszGrochal/rag-cli-tool
cd rag-cli-tool
pip install -e .

# With Ollama (free, local)
ollama pull llama3.2 && ollama pull nomic-embed-text
rag-cli index ./sample-docs/
rag-cli ask "What is the refund policy?"

PyPI: https://pypi.org/project/rag-cli-tool/
GitHub: https://github.com/LukaszGrochal/rag-cli-tool

Tags: python, cli, rag, ai, developer-tools

Sofia Core – Open Source AI Infrastructure with DNA Computing

What My Project Does

Sofia Core is open-source AI infrastructure that brings biological computing paradigms to production systems. It implements:

  • DNA Computing: Biologically-inspired algorithms achieving massive parallelism (10^15 operations)
  • Swarm Intelligence: Coordinate 1,000+ AI agents simultaneously for collective problem-solving
  • Temporal Reasoning: Time-aware predictions with causal inference

Built entirely in Python with production-ready infrastructure (FastAPI, PostgreSQL, Redis, 70%+ test coverage).

Target Audience

Production use: Yes – production-ready with real LLM integration (OpenAI, Anthropic), auth, caching, Docker/K8s support.

Who it’s for:

  • Python developers building AI applications
  • ML engineers exploring distributed intelligence
  • Researchers interested in biological computing
  • Teams needing scalable multi-agent systems

Not just a toy: 50,000+ lines of code, comprehensive tests, published research paper with benchmarks.

Comparison

vs. LangChain/LlamaIndex: Sofia Core focuses on infrastructure (compute primitives, agent coordination, temporal logic) rather than high-level chains. More similar to Ray or Celery but optimized for AI workloads.

vs. Ray: Ray does distributed computing; Sofia Core adds biological computing paradigms (DNA algorithms, swarm coordination) specifically for AI. Complementary rather than competitive.

vs. Custom solutions: Provides 300× speedups in parallel tasks (benchmarked), built-in swarm coordination, and temporal reasoning out of the box. MIT licensed with no vendor lock-in.

Unique: First open-source implementation of DNA computing + swarm intelligence + temporal reasoning in a unified production framework.

Technical Stack

🐍 Modern Python:

  • Python 3.11+
  • FastAPI for high-performance APIs
  • SQLAlchemy 2.0 with async support
  • Pydantic v2 for validation
  • Poetry for dependency management

🔧 Production-ready:

  • PostgreSQL + Redis
  • Docker + Docker Compose
  • 70%+ test coverage (pytest)
  • Complete type hints
  • Async/await throughout

Quick Start

git clone https://github.com/emeraldorbit/sofia-core-backend
cd sofia-core-backend
./quick-start.sh

Works in 5 minutes!

Code Example

from sofia_sdk import SofiaClient

client = SofiaClient()

# DNA computing for parallel search
result = client.dna_compute(
    sequence="ATCGATCG",
    computation_type="parallel_search"
)
print(f"Parallel ops: {result['parallel_operations']}")

# Swarm intelligence
swarm = client.create_swarm(
    num_agents=1000,
    coordination_strategy="consensus"
)

Resources

  • GitHub: https://github.com/emeraldorbit/sofia-core-backend
  • Research paper: 8,000 words with rigorous benchmarks (in repo)
  • API docs: Complete FastAPI Swagger documentation
  • License: MIT

Built over 20+ hours. Happy to answer questions about the Python implementation, architecture decisions, or biological computing approach!

SocialFi: How Farcaster and Lens Are Redesigning Social on Web3

From rented timelines to user‑owned social graphs and programmable feeds.

This is Day 41 of my 60‑Day Web3 journey. Over the past few weeks we have gone from core infrastructure topics like Layer 0 and Layer 3, to smart contract upgradeability, to zero‑knowledge proofs, to storage with IPFS, and most recently to Bitcoin Layer 2s. Today we are leaving pure infra for a moment and stepping into something users actually touch every day: social. Not “Web2 but token,” but new social networks like Farcaster and Lens that try to rebuild Twitter and Instagram from the ground up as protocols instead of platforms.

If you want to follow the series as it unfolds, you can find the archive on Medium and on Future. The Telegram community is still the best place to discuss each day’s post and share your own experiments; you will find us at Web3ForHumans on Telegram.

Why Social Needs a “Fi” in the First Place

Before we talk about SocialFi, it is worth asking why we are unhappy with the current social stack at all. In Web2, social platforms own three important things: your identity on that network, your social graph, and your reach. Your handle lives in their database. Your followers list is their asset, not yours. The algorithm they tune decides who sees what, and it can change overnight with no way to “fork your audience” if you disagree.

SocialFi and decentralized social try to flip this. Instead of thinking in terms of a single company’s app, they treat social as a protocol made of identity, a social graph, and content that any front‑end can build on. The economic layer is not only about speculative tokens. It is also about aligning incentives so that users, creators, and developers share more of the value that network effects create, instead of handing almost all of it to a single platform.

In 2026, conversations about SocialFi often focus on casino‑style airdrops and “engage‑to‑earn” mechanics, but there is a deeper story underneath. Projects like Farcaster and Lens are trying to build an open social graph, where your profile and your connections are portable, and where developers can create new clients and experiences without asking anyone for API access. Articles like “The Battle for Web3’s Social Graph: Why Farcaster and Lens Are Leading” frame this as a multi‑billion‑dollar race to own the “social layer” of Web3.

Farcaster: Social As a Protocol First, App Second

Farcaster is one of the most cited examples when people talk about “Web3 native social” in 2026. At a high level, it is a protocol for a social graph and short‑form posts (called “casts”), with a growing ecosystem of clients built on top. It started with Warpcast as the main client, but the core idea is that any developer can build their own Farcaster app as long as they respect the protocol rules.

Under the hood, Farcaster separates concerns that Web2 bundles together. Identity is tied to Farcaster IDs and usernames that are independent of any single UI. The social graph (who follows whom, who reacts to what) is stored in an open data layer instead of inside a proprietary database. Developers can build custom clients, bots, and indexing services that watch this public activity and offer new ways to interact with it. Some analyses, such as this Farcaster SocialFi overview, describe it as an “open, programmable social layer” instead of just a Twitter clone.

Over time, Farcaster has moved more and more logic on‑chain or into clearly specified data availability layers. There has been a gradual shift toward using on‑chain data for things like channels, reactions, and other social primitives, which means the protocol itself is becoming less dependent on any one operator. News items like reports that Farcaster is “ditching a centralized social graph and embracing on‑chain state” capture this evolution and show how serious the team is about decentralization. For builders, this means you can count on the social data layer existing beyond any single company.

Lens: User‑Owned Social Graph With Smart Contracts

Lens takes a slightly different approach to the same core problem. Instead of building a brand‑new identity system, it leans heavily into smart contracts and NFTs. At its core, Lens treats your profile as an NFT, your relationships as on‑chain connections, and your posts and collects as interactions recorded in a protocol that any client can read. The original version launched on Polygon, and more recently the team has been working on Lens Chain, a dedicated L2 environment focused on scaling social.

From a developer perspective, Lens is interesting because it is very explicit about modeling social relationships onchain. A profile is an NFT, following someone can involve minting follow NFTs, and collecting content can be tokenized. As “Lens Chain Goes Live: Scaling SocialFi with Avail and zkSync” explains, the infrastructure has evolved from a single chain deployment to a dedicated setup that uses data availability solutions and zk‑based tech to make social interactions scalable and cheap enough for mainstream use.

In the context of SocialFi, Lens tends to attract projects that are comfortable with onchain economics. It is natural to build things like creator passes, subscription models, or token‑gated communities when the underlying protocol already treats the social graph as a set of composable smart contracts. At the same time, this raises security and UX questions that are different from Farcaster’s approach, where not every interaction is directly an onchain transaction on a shared smart contract platform.

What Makes SocialFi Different From Web2 Social Plus Tokens

It is easy to dismiss SocialFi as “just another Ponzi narrative with extra steps,” especially if your first exposure to it was some farm‑and‑dump engagement campaign. To filter the noise out, it helps to focus on a few structural differences that serious deep dives, such as this SocialFi explain‑all from Chainup, highlight repeatedly.

First, identity and the social graph are intended to be portable. If a particular client changes its rules, you can in theory switch to another one without losing your followers or content. This is a huge shift from the Web2 world, where leaving a platform usually means abandoning your audience. Second, data is primarily public and indexable. Developers can build analytics, discovery, and alternative feeds by reading from the same underlying protocol, instead of being rate‑limited or banned by a single company’s API policy. Third, economic incentives are exposed at the protocol level. Creators can receive value directly via tips, mints, or onchain actions that are not entirely controlled by a centralized ad system.

Of course, there are tradeoffs. Public social graphs raise privacy questions. Token mechanics can create perverse incentives where people chase airdrops more than meaningful conversation. Infrastructure choices (which chain, what data availability layer, which wallets) still create friction for mainstream users. SocialFi is not a solved problem. It is a set of experiments at the intersection of protocols, UX, and economics, with Farcaster and Lens as two of the clearest current examples.

Farcaster vs Lens: Same Goal, Different Paths

A lot of 2026 coverage frames Farcaster and Lens as rivals in a “2.4B dollar battle for Web3’s social graph,” as in this comparison from BlockEden. It is a catchy narrative, but as a learner it is more useful to see how they complement each other in terms of design space.

Farcaster leans into a protocol‑first identity and data model, with a growing onchain component for state, but not everything being a classic smart contract interaction. It optimizes for a Twitter‑like experience and developer flexibility around indexing, bots, and custom clients. Lens leans more heavily into NFTs and explicit smart contracts, which makes it natural to build tokenized relationships, collectibles, and financialized social experiences. It optimizes for composability with existing EVM tooling and onchain DeFi and NFT primitives.

Both are exploring how much of the social graph should live directly onchain versus in dedicated data layers, how to handle moderation in an open protocol world, and how to avoid purely speculative behavior dominating the experience. From your point of view as someone interested in developer education and DevRel, they are also early case studies in how to build communities around protocols rather than around single‑company apps.

Why SocialFi Matters for Builders and Writers

You might not plan to build a SocialFi app yourself, but understanding Farcaster and Lens matters if you care about Web3 beyond DeFi. First, they are frontrunners in what Vitalik and others have called “desoc” and decentralized social, and even large exchanges have noted that SocialFi is becoming a top priority for 2026, as seen in items like Binance’s commentary on SocialFi trends. That means more projects, more users, and more questions from people who will look for clear explanations.

Second, they are proving grounds for Web3 onboarding UX. Wallet flows, gas abstraction, identity management, and content moderation are all being tested in real time on these networks. Lessons learned there will eventually feed back into other app categories, from gaming to DAOs. Third, they are fertile ground for content and community work. Explaining how to get started, how the protocols differ, how to build small tools or bots, and how to navigate incentives without burning out are all valuable topics for someone positioning themselves in Web3 education and DevRel.

For your 60‑day journey, SocialFi sits nicely alongside topics like The Graph, IPFS, Bitcoin L2s, and upgradeability. It shows that Web3 is not only about infra and DeFi, but also about the everyday user experiences that can either bring people in or push them away.

Key Takeaways

SocialFi is not just “Twitter with a token.” At its core, it is about rebuilding social networks as open protocols, where identity, social graphs, and content are shared infrastructure instead of a single company’s property. Farcaster and Lens are two leading 2026 examples of this approach, with Farcaster emphasizing a protocol‑centric social graph and Lens modeling social relationships explicitly through smart contracts and NFTs.

Both projects are still iterating on critical questions around decentralization, moderation, UX, and incentives. For builders and writers, they are important to understand because they test how Web3 handles something more human and messy than a DEX: our social interactions. As you continue this 60‑day series, keep SocialFi in mind not just as a narrative, but as a living lab where many of the concepts you have already explored, from L2s to ZK to storage, are being applied in a much more visible and emotional context.

Resources

If you want to go deeper into SocialFi, Farcaster, and Lens:

  • Overview of decentralized social and SocialFi concepts:
    What Is Decentralized Social Media? DeSoc and SocialFi Explained
  • Comparative look at Farcaster and Lens in 2026:
    The Battle for Web3’s Social Graph: Why Farcaster and Lens Are Leading
  • Farcaster introductory explainer:
    Farcaster 101: Why the Web3 Social Giant is Changing Everything in 2026
  • Lens infrastructure and scaling update:
    Lens Chain Goes Live: Scaling SocialFi with Avail and zkSync
  • Broader context around SocialFi priorities in 2026:
    Vitalik Buterin places SocialFi as the top priority for 2026

Follow the Series

This article is part of my “60 Days of Web3” learning‑in‑public experiment.

  • Read the journey on Medium: @Ribhavmodi
  • Check the dev‑friendly versions and comments on Future: Ribhav on Future
  • Join the community on Telegram: Web3ForHumans

I am learning and building in public, one day at a time. If you have thoughts on SocialFi, Farcaster, Lens, or want to share your own experiments, I would love to hear them.


Power Automate – Building Readable Flows

They say the best code is written in a way that you don’t need to add comments, just good naming and design, and that had me thinking about Power Automate.

Power Automate is a literal flow chart, and flow charts were created to make processes easier to read. So why can’t I understand what the hell some flows do?!

And that’s what this blog is all about: how, with a few small changes, you can build your flows in a way that anyone can open and understand 😎

  1. Planning
  2. Naming
  3. Notes
  4. Design

1. Planning

This one is kind of obvious but still very important. The biggest reason flows become hard to read is lack of planning. If the developer takes a “figure it out as I go” approach, the end result is disjointed and confusing.

So for every flow you should plan out a couple of key things:

  • What’s the start state of the flow
  • What’s the end state of the flow
  • What pieces of value happen within the flow

I recommend a lot more planning personally (Power Automate – 4 Steps to Building a Flow), but if you have these basic ones it will really help make the flow structured and consistent.

2. Naming

Naming things right will probably have the biggest impact on the readability of your flow. There are 2 golden rules:

  1. Describe in clear language (no abbreviations)
  2. Be consistent

I have 2 patterns I follow:

Variables

Variables need to clearly explain what data they are going to hold and the type of variable.

Variable    Type     Example
sVariable   string   sEmailSubject
iVariable   integer  iRowCount
bVariable   boolean  bFoundEntry
oVariable   object   oUserRecord
aVariable   array    aAllMatchedUsers

Actions

Actions have an additional requirement: the action type. As actions from the same connector all share the same icon, if you remove the action type from the name it can be really hard to read.

As an example, how easy is it to read the flow below? Can you tell me what the 2 actions do?

actions with no description

Guess what i do is a create item, and Guess what i do2 is a delete item. Quite a lot to get wrong, lol.

So actions must keep the original name of what they do. How much easier is the flow below to read?

actions with description

Again with consistency, all mine are Action – description. I also ensure the description is detailed enough. I could just put Get Items – records, but by adding the filter query detail, over 10 days old, I give a much better understanding of what the flow is trying to do.
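Applied consistently, the Action – description pattern for the example above might look like this (the names are hypothetical, built from the flow described earlier):

```
Get items – Records over 10 days old
Apply to each – Expired record
Delete item – Remove expired record
```

Reading the names alone already tells you the flow fetches stale records and deletes them, before you open a single action.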

3. Notes

Yes, I know my first sentence made out that you don’t need notes when you create clear solutions, but it’s not true. Notes are such an underutilized feature in Power Automate. You don’t need them everywhere, but there are a couple of times when they can be really useful.

Expressions

If you have one of those beautiful expressions that use multiple functions to come up with an amazing solution (you know the ones 😎), then without the developer’s insight and understanding these can be incredibly hard to understand, let alone edit. Dropping in a note that explains what the expression does can make everything make sense (including for future you, who has forgotten what the flow does too).
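As a sketch, consider an action whose input is the expression below (these are real Power Automate functions, but the scenario is invented for illustration):

```
formatDateTime(addDays(utcNow(), -10), 'yyyy-MM-dd')
```

On its own you have to unpick it from the inside out; a note on the action such as “Builds the cutoff date 10 days back, formatted for the Get items filter query” makes the intent obvious at a glance.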

Solutions Through Pain

Again, another one that we all experience. When something that should work just doesn’t and you have no idea why (let’s be honest, it’s normally Power Automate’s fault, but anyway). Through blood, sweat, tears, and repeated trial and error you come up with a solution. This is where a note is so important: not only does it explain what the flow is doing, it also explains why the expected way doesn’t work. I’ve done it before where I refactored a flow, thought the original developer was mad, and rebuilt it the “right way”, only to find it didn’t work. I then had to suffer the same pain as the original developer to get it working again.

4. Design

Design is probably the most difficult to nail, as it’s more of an intuition. But there are a couple of key callouts that good design should have to make the flow easy to read.

Keep It Short

This is kind of obvious but also the most commonly skipped. Keeping your flows short makes understanding them so much easier, and there are 2 main ways to do this.

The obvious one is to use fewer actions, which can be done by:

  • Removing unnecessary actions like variables and composes
  • Using the right actions, like filters instead of conditions
  • Moving shared actions to before or after branches

The last one is probably the most underutilised but can make a flow so much easier to understand.
If you duplicate actions in branches, it’s easy to think they are very different paths; but if they are the same except for their inputs, then moving them to after the branch and using a coalesce() makes it clear they are the same.

move action
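One way to sketch this, assuming a condition whose branches each produce an item ID: initialise a variable per branch before the condition, have each branch set its own, then give the single shared action after the branch an input of (variable names hypothetical, following the naming convention above):

```
coalesce(variables('iApprovedItemId'), variables('iRejectedItemId'))
```

coalesce() returns the first non-null value, so whichever branch ran supplies the input, and the duplicated action collapses into one.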

The other way is to split the flow into childflows. That way we make the actions more relatable. Though this can only be done when the actions are related and/or complete a specific piece of value. This rolls in nicely to the next design principle…

Grouping Actions

Actions that are directly related or complete a specific task should be grouped together. When you create your flow don’t think of it as one flow full of actions, think of it as stages, and each stage should have actions. This design principle has 4 levels:

  1. Ensure related actions are next to each other
  2. Name the actions in a related way
  3. Add the actions into a scope
  4. Move the actions to a childflow

The scope is the most commonly used, but in my opinion it should be the least used. It adds nesting (see later) and API calls, and in most cases naming in a related way is just as good.

Simple Paths

And this is the big one: you need to keep your paths simple. The 2 biggest impacts on flow paths are:

  • Branching
  • Nesting

Adding branches means that you are now trying to understand each decision on its own, losing focus on the overarching point of the flow. It also forces duplication of actions, which again can mislead the developer into thinking the differences between paths are bigger than they are.

Nesting is similar: with how Power Automate is displayed, it is easy to lose track of what level you are in and the impact of each of those containers.

high nesting
Understanding what each of those nesting boxes means and how it impacts values can be incredibly difficult.

I strongly recommend you check out my previous blog about ‘The Direct Methodology’, which deep dives into the other benefits of simple paths and the techniques that enable them.

Think Like a Human

I see far too many flows that follow a very unnatural process path. I’m not saying you should copy the human process, but you need to follow the same logical steps a human would. My best example of this is handling 2 different outcomes in a loop.

Nearly everyone would make a flow like this:

pass fail loop

You loop over every interview form: if passed, send the pass email; if failed, send the rejection. But is this how you would do it in real life? I suspect you would sort the forms into 2 piles by pass and fail, then work through the passed pile, then work through the failed.

pass fail stack

  • Easier to read ✅
  • Easier to read logs (I can find the nth passed form so much more easily) ✅
  • Uses fewer API calls, so maybe a ✅

If you make your process follow those human ‘common sense’ steps, then it is instinctively easier to understand.
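In Power Automate terms, the sorted-piles version replaces the condition inside the loop with two filtered queries up front. Assuming a hypothetical Outcome column on the interview form list, the two Get items filter queries (OData) would be:

```
Outcome eq 'Passed'
Outcome eq 'Failed'
```

Each result then feeds its own Apply to each, containing only the single send action for that pile.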

As you can see, there are some simple steps that add little development time but can make your flows so much easier to read. And trust me, future you will be so happy when debugging a critical issue 😎

 
😎 Subscribe to David Wyatt