ETA Failures: How AI Fixes Itself

Waiting for a ride or food delivery can be frustrating when the app suddenly changes the estimated arrival time from 20 minutes to 45 minutes. Behind the scenes, AI systems use large models and data from traffic, weather, and past trips to make these predictions. However, unexpected events and imperfect…

Continue reading

Agentic AI

Agentic AI is a new kind of artificial intelligence designed to think and act on its own to achieve complex goals. Unlike traditional AI, which typically follows set rules or responds to specific commands, agentic AI makes decisions, plans its actions, completes tasks, and learns from the results—all without needing…

Continue reading

How LLMs Actually Learn to Predict ETA: Inside the Black Box

This post explores how LLM-driven systems transform simple travel predictions into something intelligent, accurate, and responsive. Classic ETA systems rely on basic formulas: distance, average speed, maybe a historical table showing usual delays. These approaches are serviceable in predictable conditions but often fall short. Unexpected traffic jams, road closures, unusual…

Continue reading

Prompt Injection Attacks

Prompt injection attacks are a growing problem in AI tools like chatbots and language models. They happen when someone adds or “injects” extra instructions or harmful content into a prompt to manipulate the AI. Learning how to protect AI systems from these attacks is important for anyone who builds or…

Continue reading

Vector Database vs Similarity Metric

A vector database is a specialized system for storing and searching high-dimensional data represented as vectors. In simple terms: It acts as a storage space for embeddings (numeric representations), which might come from texts, images, or audio. The main job of a vector database is to quickly find which stored vectors are most similar…

Continue reading

Small Fine-Tuned Models vs Large General LLMs

Modern natural language processing allows developers to choose between small fine-tuned language models and large general-purpose LLMs like GPT-4 or LLaMA. Both solutions have their strengths and trade-offs. Small Fine-Tuned Models Small fine-tuned models, sometimes called SLMs (Small Language Models), have fewer parameters—from several million up to a few billion. They are first trained…

Continue reading