<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Writing on Machine Learning, Systems, and Engineering on Neural Odyssey</title><link>https://danialjfz.github.io/myblog/posts/</link><description>Recent content in Writing on Machine Learning, Systems, and Engineering on Neural Odyssey</description><generator>Hugo -- gohugo.io</generator><language>en</language><managingEditor>Danialj999@gmail.com (Danial Jafarzadeh)</managingEditor><webMaster>Danialj999@gmail.com (Danial Jafarzadeh)</webMaster><copyright>© 2026 Danial Jafarzadeh</copyright><lastBuildDate>Fri, 08 Aug 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://danialjfz.github.io/myblog/posts/index.xml" rel="self" type="application/rss+xml"/><item><title>Neural Networks: Building Intuition Beyond the Math</title><link>https://danialjfz.github.io/myblog/posts/neural-networks-intuition/</link><pubDate>Thu, 20 Nov 2025 00:00:00 +0000</pubDate><author>Danialj999@gmail.com (Danial Jafarzadeh)</author><guid>https://danialjfz.github.io/myblog/posts/neural-networks-intuition/</guid><description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;br&gt;
This piece explains neural networks from intuition upward: what a neuron is, why layers help, how gradient descent changes weights, and why backpropagation is less mystical than it sounds. It is written for readers who want the concepts to feel concrete before diving deeper into the math.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;br&gt;
This post is a checklist of failure modes that quietly ruin ML projects: skipped data inspection, leakage, weak evaluation, class imbalance, and irreproducible experiments. The point is not to dramatize mistakes, but to make the debugging habits explicit before they cost days of work.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;br&gt;
This post walks through a first end-to-end ML workflow using a real image classification task. The goal is not just to train a model, but to build the habits that matter in practice: checking data, splitting correctly, choosing a simple baseline, and evaluating results without fooling yourself.&lt;/p&gt;</description></item><item><title>Welcome to Neural Odyssey</title><link>https://danialjfz.github.io/myblog/posts/welcome/</link><pubDate>Fri, 08 Aug 2025 00:00:00 +0000</pubDate><author>Danialj999@gmail.com (Danial Jafarzadeh)</author><guid>https://danialjfz.github.io/myblog/posts/welcome/</guid><description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;br&gt;
This opening post explains what Neural Odyssey is for: practical writing about machine learning, systems work, debugging, and the messy parts of learning in public. It sets the tone for the blog and the kind of posts that will be worth following.&lt;/p&gt;</description></item></channel></rss>