Research, guides, and analysis from barcik.training
Actionable architectural patterns for building AI coding agents and agentic systems, extracted from a production-grade architecture. Covers persistent memory, background consolidation, tool constraints, prompt economics, output calibration, security, multi-agent orchestration, and capability gating. Each chapter teaches one pattern with practitioner guidance.
Read the booklet →

Four credible scenarios for the next 2–3 years of generative AI — continued scaling, efficiency revolution, financial correction, and plateau with regulation. Each scenario features an interactive visualization anchor, key data points, trigger signals, and role-specific implications. Includes a 2×2 synthesis matrix and a scenario planning worksheet for team exercises.
Read the booklet →

A strategic guide for EU IT services providers navigating GenAI. Covers the economics of self-hosting LLMs vs. APIs, viable business model pivots, the vendor ecosystem play, how AI transforms your own delivery model, EU AI Act compliance opportunities, and a practical 18-month roadmap. Grounded in real April 2026 pricing data.
Read the full guide →

A comprehensive guide to configuring Claude Code across a multi-repo hub. Covers the three-layer persistent context system (CLAUDE.md, memory, permissions), CLI integrations with GitHub and AWS, cross-machine portability, and a detailed security analysis including defense-in-depth strategies for AI coding assistants.
Read the guide →

Systematic evaluation of geopolitical biases in 7B-parameter language models from three origins (US, CN, EU). Tests 88 prompts across 7 categories using a multi-evaluator panel. Reveals asymmetric performance on sensitive topics and scripted deflection patterns.
View the report →

Evaluates whether small language models can reliably assess the quality of their own outputs. Tests self-judgment accuracy across factual grounding, instruction following, safety boundaries, consistency, and tone — with accuracy ranging from 50% (1B) to 83% (27B).
View the report →

Behavioral safety evaluation using Anthropic's Bloom framework. Tests 11 risk behaviors including emotional bonding, social engineering assistance, self-preservation, corrigibility resistance, and covert goal pursuit. Scores range from 2.1 to 6.8 on a 10-point scale.
View the report →

A collection of engaging short stories, each exploring a different cognitive bias — all written with generative AI. Interspersed with essays examining the nature of AI tools: copyright, creativity, job displacement, and the question of authorship. The AI holds up a mirror to human thinking, reflecting our own imperfections.
Read in English → Čítať po slovensky →