

⌚ Nice watch!

2024-12-15 Dependency Injection in C++ - A Practical Guide - Peter Muldoon - C++Now 2024 - YouTube { www.youtube.com }

image-20241215015248533

Long talk! Below is only a list of the topics covered. I personally want to focus on "Inheritance and Virtual Functions" and "Template-Based Dependency Injection" with concepts; concepts look really cool (see the sketch after the list).

Methods of Dependency Injection

  • Link-Time Dependency Injection
    • Overview and explanation
    • Issues with link-time DI (fragility, undefined behavior, ODR violations)
    • Reasons to avoid link-time DI in modern systems
  • Inheritance and Virtual Functions
    • Base class and derived classes for DI
    • Interface-based DI (abstract interfaces)
    • Drawbacks (interface bloat, large interface sizes, tight coupling)
  • Template-Based Dependency Injection
    • Using templates to achieve DI
    • Benefits of compile-time DI
    • Concepts (C++20) for template constraints
    • Pros and cons of using templates for DI
  • Type Erasure (std::function)
    • Using std::function for DI
    • Flexibility and run-time benefits
    • Overhead and runtime costs of std::function
  • Null Object Pattern
    • Creating "null" objects for dependency injection
    • Use cases and benefits
    • How to use null objects for testing
  • Setter Injection
    • Description of setter-based DI
    • Problems with setter injection (state mutation, initialization order issues)
    • Why setter injection is generally avoided
  • Method Injection
    • Description of method-level DI
    • Pros (clearer interfaces) and cons (interface bloat)
  • Constructor Injection
    • Constructor-level DI for immutability
    • Best practices for constructor injection
    • Drawbacks (API changes, large constructor argument lists)
  • Dependency Suppliers (Factory Functions)
    • Using supplier functions to control dependency injection
    • How dependency suppliers differ from service locators
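
Since my focus is the template-based approach, here is a minimal sketch of compile-time DI constrained with a C++20 concept. This is my own illustration, not code from the talk, and the Logger/ConsoleLogger/NullLogger names are made up; it also shows how the Null Object pattern drops in for tests:

// Hedged sketch: template-based dependency injection with a C++20 concept.
#include <concepts>
#include <iostream>
#include <string_view>

// The dependency's compile-time interface, expressed as a concept.
template <typename T>
concept Logger = requires(T& t, std::string_view msg) {
    { t.log(msg) } -> std::same_as<void>;
};

// Production dependency.
struct ConsoleLogger {
    void log(std::string_view msg) { std::cout << msg << '\n'; }
};

// Null Object: satisfies the same concept, does nothing (handy in tests).
struct NullLogger {
    void log(std::string_view) {}
};

// The component gets its dependency injected through the constructor,
// resolved at compile time and constrained by the concept.
template <Logger L>
class OrderProcessor {
public:
    explicit OrderProcessor(L& logger) : logger_(logger) {}
    void process() { logger_.log("processing order"); }
private:
    L& logger_;
};

int main() {
    ConsoleLogger console;
    NullLogger silent;
    OrderProcessor real{console};  // production wiring
    OrderProcessor test{silent};   // test wiring via the null object
    real.process();
    test.process();
}

No virtual dispatch and no interface bloat; the trade-off is that OrderProcessor becomes a template, so the wiring is fixed at compile time.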

2024-12-14 Master Tailwind CSS Crash Course 2024 | not a tutorial - YouTube { www.youtube.com }

by Ankita Kulkarni

image-20241214155934482

/* Introduction */
// This document serves as a comprehensive reference sheet for key Tailwind CSS concepts and utilities.
// Each section focuses on a major topic, providing a functional code sample that covers its subtopics.
// Use this guide as a quick reference for essential Tailwind features.

/* 1. Core Concepts of Tailwind CSS */
<div class="container mx-auto p-6">
<h1 class="text-4xl font-bold mb-4">Core Concepts of Tailwind CSS</h1>
<p class="text-gray-600">This paragraph demonstrates text utilities, margin, and padding.</p>
<button class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded">
Click Me
</button>
</div>

/* 2. Responsive Design */
<div class="grid grid-cols-1 sm:grid-cols-2 md:grid-cols-3 gap-4 p-6">
<div class="bg-red-500 p-4">1</div>
<div class="bg-green-500 p-4">2</div>
<div class="bg-blue-500 p-4">3</div>
<div class="bg-yellow-500 p-4">4</div>
</div>

/* 3. Grid and Flexbox */
<div class="flex flex-col md:flex-row md:justify-between p-6">
<div class="bg-purple-500 p-4 flex-1">Flex Item 1</div>
<div class="bg-orange-500 p-4 flex-1">Flex Item 2</div>
<div class="bg-teal-500 p-4 flex-1">Flex Item 3</div>
</div>

<div class="grid grid-cols-2 md:grid-cols-4 gap-6 p-6">
<div class="bg-pink-500 h-20"></div>
<div class="bg-blue-500 h-20"></div>
<div class="bg-green-500 h-20"></div>
<div class="bg-red-500 h-20"></div>
</div>

/* 4. Padding, Margins, and Spacing */
<div class="p-10 m-10 bg-gray-100">
<h2 class="mb-6">Padding and Margin Example</h2>
<p class="py-4 px-6 bg-white shadow-lg rounded">This box has custom padding and margin.</p>
</div>

/* 5. Borders and Border Radius */
<div class="border-4 border-dashed border-blue-500 rounded-lg p-6 m-6">
<h2 class="text-xl font-bold">Dashed Border with Radius</h2>
<p class="mt-4">This container demonstrates border styles and border radius utilities.</p>
</div>

/* 6. Typography and Text Styling */
<div class="p-6">
<h1 class="text-4xl font-extrabold underline decoration-pink-500">H1 Header</h1>
<h2 class="text-3xl font-semibold mt-4">H2 Header</h2>
<p class="text-base text-gray-700 leading-relaxed mt-2">This is a paragraph demonstrating text styling like font size, color, and line height.</p>
</div>

/* 7. Customizing Colors */
<div class="bg-custom-purple text-white p-6">
<h2 class="text-xl">Custom Color</h2>
<p>Custom colors can be configured in tailwind.config.js</p>
</div>

/* 8. Box Shadows and Drop Shadows */
<div class="shadow-lg p-6 m-6 bg-white rounded-lg">
<h2 class="font-bold">Box Shadow Example</h2>
<p>This container has a large box shadow applied to it.</p>
</div>

/* 9. Customizing Animations and Transitions */
<button class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded transition duration-300 ease-in-out transform hover:scale-105">
Hover Me
</button>

/* 10. Images and Transformations */
<img src="/path/to/image.jpg" class="w-64 h-64 object-cover rounded-full transform rotate-12">

/* 11. State Management */
<input type="text" placeholder="Focus Me" class="focus:outline-none focus:ring-2 focus:ring-blue-500 p-2 border border-gray-300 rounded">

/* 12. Dark Mode in Tailwind */
<div class="dark:bg-gray-800 dark:text-white p-6">
<h2 class="text-xl">Dark Mode Example</h2>
<p>This text changes color in dark mode.</p>
</div>

/* 13. Filters and Effects */
<img src="/path/to/image.jpg" class="w-64 h-64 filter grayscale hover:grayscale-0 transition duration-300">

/* 14. Custom Utility Classes */
<div class="custom-button bg-blue-500 text-white font-bold py-2 px-4 rounded">
Custom Button
</div>

/* 15. Advanced Layout Techniques */
<div class="max-w-4xl mx-auto p-6">
<h2 class="text-2xl font-bold mb-4">Advanced Layout</h2>
<div class="flex justify-center">
<div class="w-1/2 bg-red-500 p-4">50% Width</div>
</div>
</div>

/* 16. Gradients and Backgrounds */
<div class="bg-gradient-to-r from-purple-400 via-pink-500 to-red-500 text-white p-6 rounded-lg">
<h2 class="text-xl font-bold">Gradient Background</h2>
<p>This container has a beautiful gradient background.</p>
</div>

/* 17. Customizing Layouts */
<div class="grid grid-cols-2 gap-4">
<div class="bg-blue-500 h-20"></div>
<div class="bg-green-500 h-20"></div>
<div class="bg-red-500 h-20"></div>
<div class="bg-yellow-500 h-20"></div>
</div>

/* 18. Project Walkthrough */
<div class="p-6">
<h2 class="text-2xl font-bold mb-4">Project Walkthrough</h2>
<p class="text-gray-600">This project demonstrates how all the Tailwind concepts come together to create a cohesive layout.</p>
</div>

/* 19. Additional Resources */
<div class="p-6 bg-gray-100">
<h2 class="text-xl font-bold">Resources</h2>
<ul class="list-disc pl-6">
<li>Official Tailwind CSS Documentation</li>
<li>VS Code Tailwind IntelliSense Plugin</li>
<li>Learning Responsive Design and Dark Mode</li>
</ul>
</div>

2024-12-10 What's a Tensor? - YouTube { www.youtube.com }

image-20241209163109409 image-20241209163220854

2024-11-28 Playing Game on the Mall Wall: Japanese Man's Super-sized Adventure! - YouTube { www.youtube.com } image-20241127171556754

Nomad Push is a 38-year-old Japanese man who’s homeless and travels all over Japan. On his YouTube channel, he shares his daily life in a really honest and down-to-earth way. You’ll see him doing things like:

  • Sleeping in train stations
  • Exploring abandoned houses
  • Cooking simple meals in parks

Even though he’s dealing with tough times, his videos feel positive and show a side of life most people don’t get to see. A lot of people watching his channel say it’s inspiring, and he’s built a big community of fans who support him. When he hit 100,000 subscribers, another YouTuber, Oriental Pearl, even threw a celebration for him, which shows how much people believe in him.

If you’re learning Japanese, this channel is a goldmine. His videos are full of real Japanese conversations, and he adds subtitles to help viewers follow along. It’s great practice for understanding how people actually talk in Japan.

Nomad Push’s channel is like a window into his life and a journey across Japan at the same time. It’s simple, real, and worth checking out if you’re curious about a different way of seeing the world.

2024-12-02 A Day in the Life of a Japanese Hikikomori (Shut In) - YouTube { www.youtube.com }

image-20241201211155692

Inside the Life of Nito: A Hikikomori Turned Game Developer

Nito, a hikikomori living in Kobe, Japan, has spent the past decade in near-total isolation. Far from idle, he has dedicated the last five years to developing Pull Stay, an old-school beat-em-up game reflecting his experiences as a recluse. The protagonist, a hikikomori himself, battles societal judgment—a theme close to Nito’s heart. Using Unreal Engine, he has self-taught coding, 3D design, and storytelling to bring his vision to life.

A Creative Path Born from Setbacks After graduating from the University of Tokyo, Nito struggled to find his footing in traditional creative fields like writing and doujinshi (independent manga). He shifted to game development when tools like Unreal Engine became accessible. Despite the steep learning curve and his limited English skills, Nito found purpose in creating something meaningful on his own terms.

Breaking Stereotypes and Defying Odds Nito’s life defies the typical hikikomori stereotype of idleness and dependence. His determination and self-taught skills showcase resilience, proving isolation doesn’t equate to lack of ambition. Through Pull Stay, he turns personal struggles into a story that others can relate to and enjoy.

What’s Next? With Pull Stay nearing release on Steam, Nito hopes its success will enable him to collaborate with other creators and travel the world. If it doesn’t take off, he plans to use the game as a portfolio to break into the industry. For now, his story serves as an inspiring reminder of the power of creativity and persistence.

Support Nito by checking out Pull Stay on Steam or sharing his journey with others.

https://store.steampowered.com/app/1179890/Pull_Stay/

image-20241201211045494

2024-11-30 10% Of Engineers Should Get Fired - YouTube { www.youtube.com }

image-20241129170004213

What Are Ghost Engineers? Ghost engineers are unproductive employees contributing less than 10% of a median engineer’s output. They account for up to 10% of the workforce and cost companies $90 billion annually. These individuals often perform minimal tasks, such as making fewer than three commits a month or trivial changes, while collecting full salaries.

Key Insights:

  • Economic Impact: Eliminating ghost engineers could save companies billions and add $465 billion to market caps without reducing performance.
  • Remote Work Paradox: While top engineers excel remotely, the worst also thrive in remote settings. 14% of remote engineers are ghost engineers compared to 6% in-office.
  • Cultural Cost: Ghost engineers demoralize motivated teammates and occupy roles that could go to skilled newcomers.
  • Startups’ Advantage: Startups avoid this issue by demanding accountability from every team member, contributing to their ability to outperform larger organizations.

Why It Matters: Ghost engineers don’t just waste money—they stall innovation, hinder team dynamics, and damage the credibility of remote work. Companies have a unique chance during layoffs to address this inefficiency, open doors to fresh talent, and foster a culture of accountability.

The Way Forward: Fire unproductive workers, improve performance metrics, and rebuild trust in remote work by ensuring accountability. The tech industry’s future depends on tackling this hidden crisis.

Sources:

2024-11-30 Yegor Denisov-Blanch on X: "I’m at Stanford and I research software engineering productivity. We have data on the performance of >50k engineers from 100s of companies. Inspired by @deedydas, our research shows: ~9.5% of software engineers do virtually nothing: Ghost Engineers (0.1x-ers) https://t.co/uygyfhK2BW" / X { x.com }

image-20241129171111447

2024-11-30 Tech's $90B Ghost Engineer Problem: Stanford Study Finds 9.5... { socket.dev }

Das highlighted a few tools of the trade from the “quiet quitting” playbook:

  • “in a meeting” on slack
  • scheduled slack, email, code at late hours
  • private calendar with blocks
  • mouse jiggler for always online
  • “this will take 2 weeks” (1 day)
  • “oh, the spec wasn’t clear”
  • many small refactors
  • “build is having issues”
  • blocked by another team
  • will take time bcuz obscure tech reason like “race condition”
  • “can you create a jira for that?”

2024-12-07 Keynote: Advent of Code, Behind the Scenes - Eric Wastl - YouTube { www.youtube.com }

image-20241206204142279

image-20241206205615587

Hello friends! My name is Eric Wastl, and Advent of Code is a project I created to help programmers improve their skills through small, self-contained challenges. The puzzles start easy and get progressively harder, helping you learn new techniques and develop problem-solving skills. I believe the best way to learn is by solving specific problems, and this project reflects that. We even have C++ in Advent of Code, and I’ll touch on where and how during the talk. Drawing from my experience designing systems for ISPs, auction infrastructure, and marketplaces, Advent of Code is all about celebrating learning, curiosity, and the joy of programming for everyone, no matter their level.

2024-12-07 To Int or To Uint - Alex Dathskovsky - YouTube { www.youtube.com } {C++}

image-20241206224226483

image-20241207113322117

This talk provides valuable insights into handling integers in C++. Integers are fundamental in any program, but improper handling can lead to subtle bugs, undefined behavior, and poor performance. It explores the complexities of signed and unsigned integers, common mistakes, and how to optimize performance. Understanding these nuances helps you avoid common pitfalls, write more efficient code, and improve the overall robustness of your applications.

The Basics of Signed and Unsigned Integers

Representation in Memory

  • Unsigned Integers: Plain binary representation; arithmetic wraps modulo 2^N. Overflow behavior is well-defined, which means operations that exceed the maximum value wrap around predictably.
  • Signed Integers: Historically, C++ allowed several representations, such as one’s complement and two’s complement. Since C++20, two’s complement is the standard, but signed overflow remains undefined behavior, so operations involving signed integers require careful handling to avoid unexpected results.
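
A tiny sketch of that difference (my own example, not from the talk; assumes a typical 32-bit int):

// Unsigned arithmetic wraps modulo 2^N (well-defined); signed overflow is UB.
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    std::uint32_t u = std::numeric_limits<std::uint32_t>::max();
    u += 1;                  // well-defined: wraps around to 0
    std::cout << u << '\n';  // prints 0

    std::int32_t s = std::numeric_limits<std::int32_t>::max();
    // s += 1;               // undefined behavior: signed overflow
    //                       // (-fsanitize=undefined would flag it at runtime)
    std::cout << s << '\n';  // prints 2147483647
}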

Performance Considerations Signed integers often involve additional steps in assembly code, such as preserving the sign bit during division or right shifts. This makes operations on signed integers slower compared to their unsigned counterparts, especially in performance-critical code. For example, unsigned division by two can be replaced by a simple bit shift. Signed division, on the other hand, requires arithmetic shifts that preserve the sign bit, adding extra overhead.
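
A small illustration of that point (my own sketch, not from the talk; exact code generation depends on the compiler and target):

// For unsigned x, x / 2 is exactly a logical right shift.
// For signed x, division truncates toward zero while an arithmetic right shift
// rounds toward negative infinity, so compilers typically add a sign-based
// adjustment for negative values before shifting.
unsigned udiv2(unsigned x) { return x / 2; }  // typically a single logical shift
int      sdiv2(int x)      { return x / 2; }  // typically: add the sign bit, then arithmetic shift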

Best Practices for Handling Integers

Use Fixed-Width Integer Types Explicitly use types like int32_t, uint64_t, and size_t when appropriate. These make your code portable and clear about the expected range of values.

Prefer Signed Types Unless Necessary Unsigned integers should only be used when their wrapping behavior is explicitly desired. For most use cases, signed integers are safer and less prone to subtle bugs.

Leverage C++20 and C++23 Features Modern C++ provides tools like std::ssize and type traits that simplify working with integers. Use these features to avoid common pitfalls and ensure correctness.
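
For example, here is a hedged sketch of the classic size()-underflow bug and the std::ssize fix (illustrative only):

#include <cstddef>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    std::vector<int> v;  // empty on purpose

    // Bug: v.size() is unsigned, so v.size() - 1 wraps to a huge value on an
    // empty vector and the loop would run (and index out of bounds):
    // for (std::size_t i = 0; i < v.size() - 1; ++i) { /* ... */ }

    // C++20 std::ssize returns a signed count, so the comparison behaves as
    // expected and the loop body is simply skipped for an empty vector.
    for (std::ptrdiff_t i = 0; i < std::ssize(v) - 1; ++i) {
        std::cout << v[i] << '\n';
    }
}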

Treat Warnings as Errors Enable strict compiler warnings (-Wall, -Wextra, and -Werror) and sanitizers to catch potential issues early. Compiler tools can often detect problems like signed-unsigned mismatches before they cause runtime errors.

Avoid Overusing auto While auto simplifies code, it can obscure type information, leading to unexpected behavior. Be explicit with integer types, especially in loops and arithmetic operations.

Author: Alex Dathskovsky

2024-12-07 Demystifying CRTP in C++: What, Why, and How { www.cppnext.com }

2024-12-07 Exposing the not-so-secret practices of the cult of DDD - Chris Klug - - YouTube { www.youtube.com }

image-20241207150005664 image-20241207151010087

image-20241207151736502

image-20241207151910484

2024-12-07 Bosses Are FIRING Gen Z Workers Just Months After Hiring Them. - YouTube { www.youtube.com }

image-20241207154832440

Source: 2024-12-07 1 in 6 Companies Are Hesitant To Hire Recent College Graduates - Intelligent { www.intelligent.com }


Good Reads

2024-12-01 Legacy Shmegacy - David Reis on Software { davidreiscto.substack.com }

image-20241201132743791

People call some code legacy when they are not happy with it. Usually it simply means they did not write it, so they don’t understand it and don’t feel safe changing it. Sometimes it also means the code has low quality or uses obsolete technologies. Interestingly, in most cases the legacy label is about the people who assign it, not the code it labels. That is, if the original authors were still around the code would not be considered legacy at all.

This model allows us to deduce the factors that encourage or prevent some code from becoming legacy:

  1. The longer programmers’ tenures are, the less code will become legacy, since authors will be around to appreciate and maintain it.
  2. The more code is well architected, clear, and documented, the less it will become legacy, since there is a higher chance the author can transfer it to a new owner successfully.
  3. The more the company uses pair programming, code reviews, and other knowledge transfer techniques, the less code will become legacy, as people other than the author will have knowledge about it.
  4. The more the company grows junior engineers the less code will become legacy, since the best way to grow juniors is to hand them ownership of components.
  5. The more a company uses simple standard technologies, the less likely code will become legacy, since knowledge about them will be widespread in the organization. Ironically if you define innovation as adopting new technologies, the more a team innovates the more legacy it will have. Every time it adopts a new technology, either it won’t work, and the attempt will become legacy, or it will succeed, and the old systems will.

The reason legacy code is so prevalent is that most teams are not good enough at all of the above to avoid it, but maybe you can be.

🥒 2024-12-01 Tech's $90B Ghost Engineer Problem: Stanford Study Finds 9.5... { socket.dev }

Beyond the economic and productivity concerns, ghost engineers pose significant security risks. Their lack of meaningful engagement can lead to a few critical issues: unreviewed or improperly tested code changes, unnoticed vulnerabilities, and outdated systems left unpatched. A disengaged engineer might also miss—or deliberately ignore—critical security protocols, creating potential entry points for malicious actors.

When these engineers aren't actively involved in maintaining secure practices, they can create blind spots in a company’s defense strategy, increasing the risk of breaches or compliance failures. Threat actors can exploit disengaged engineers through phishing, social engineering, or leveraging neglected updates and poorly reviewed code to infiltrate systems and compromise security. Addressing these gaps requires better oversight and collaborative practices.

Before you start side-eyeing your coworkers, it’s worth noting that measuring productivity in software engineering is notoriously tricky. Commit counts or hours logged are often poor indicators of true impact. Some high-performing engineers—the mythical “10x engineers”—produce significant results with fewer, well-thought-out contributions.

However, the “ghost engineer” trend exposes systemic inefficiencies in talent management and performance evaluation. Remote work policies, once heralded as a game-changer, are now under the microscope. They’ve enabled flexibility for many but have also given rise to the ghost engineering phenomenon. The tug-of-war over remote versus in-office work is likely to intensify as companies grapple with these kinds of leadership and accountability issues.

image-20241201002539567

2024-11-30 The deterioration of Google { www.baldurbjarnason.com }

I'm Baldur Bjarnason, a web developer and writer. In my latest essay, I wrote about the decline of Google and its impact on independent publishers.

image-20241130153023657

Here's a quick summary:

  1. Independent Publishers Struggling: Many independent sites are shutting down due to a lack of traffic from Google and Facebook.
  2. Google's Machine Learning Issues: Google's attempt to improve search results with machine learning has backfired, letting spam through and delisting quality content.
  3. Economic Impact: Even frugally run sites can't survive on the remaining traffic, leading to significant financial struggles for creators.
  4. Algorithm Black Box: Google's algorithm has become so complex that even their engineers can't fully understand or fix it.
  5. Monopoly Power: Google's monopoly allows it to capture value without improving product utility, leaving users with fewer alternatives.

2024-11-30 15 Lessons From 15 Years of Indie App Development { lukaspetr.com }

Hey there, I'm Lukas Petr, an indie iOS app developer from Prague. Over the past 15 years, I've learned a lot about the ups and downs of indie app development. Here are some key takeaways:

image-20241130153146170

  1. Enjoy the Process: Loving what you do is crucial. If you don't enjoy the journey, it will be tough to stick with it.
  2. Understand Your Motivation: Know why you're doing this. For me, it's about creating something meaningful and useful.
  3. Risk and Reward: The risk is high, but the reward of fulfilling work and ownership is worth it.
  4. Find Your Niche: Focus on what you believe in and what scratches your own itch.
  5. Provide Additional Value: Aim for sustainable value over time, not just quick gains.
  6. Wear Many Hats: Be prepared to handle everything from development to marketing.
  7. Reflect Regularly: Regular introspection helps you stay on track and improve.
  8. Learn and Apply Lessons: Keep evolving and improving based on your experiences.
  9. Find Support: Surround yourself with people who can help propel you forward.
  10. Luck: Sometimes, success involves a bit of luck, but you have to put yourself out there.

I hope you find these insights helpful. If you're pursuing any creative endeavor, I'm rooting for you! Feel free to reach out if you have any questions or comments.

2024-11-24 A career ending mistake — Bitfield Consulting { bitfieldconsulting.com }

A career-ending mistake isn't always a catastrophic error like shutting down a nuclear power station or deleting a production database; it's often subtler, like failing to plan for the end of your career. The article explores how many of us rush through our professional lives without a clear destination, highlighting that "career" itself can mean "to rush about wildly." It asks the critical questions: “Where do you want to end up? And is that where you're currently heading?” Instead of drifting, the piece advises us to define what we truly want, as "The indispensable first step to getting what you want is this: decide what you want." Whether you're content in your current role or seeking something more fulfilling, understanding your end goal and working intentionally toward it is key to avoiding a career that feels out of control.

Fun quote:

Engineering managers need a solid foundation of technical competence, to be sure, but the work itself is primarily about leading, supervising, hiring, and developing the skills of other technical people. It turns out those are all skills, too, and relatively rare ones.

Managing people is hard; much harder than programming. Computers just do what you tell them, whether that’s right or wrong (usually wrong). Anyone can get good at programming, if they’re willing to put in enough time and effort. I’m not sure anyone can get good at managing, and most don’t. Most managers are terrible.

That’s quite a sweeping statement, I know. (Prove me wrong, managers, prove me wrong.) But, really, would a car mechanic last long in the job if they couldn’t fit a tyre, or change a spark plug? Would a doctor succeed if they regularly amputated the wrong leg? We would hope not. But many managers are just as incompetent, in their own field, and yet they seem to get away with it.

2024-11-23 the tech utopia fantasy is over | ava's blog { blog.avas.space }

Growing up, I had a positive view of tech, believing it would bring comfort, less work, and personalized assistance. However, the reality has been different, with tech companies failing to deliver on their promises and instead contributing to issues like disinformation, economic inequality, and environmental harm. While there have been some benefits, such as increased political knowledge and social connections, the negatives now overshadow the positives. The tech utopia fantasy is truly dead to me.

2024-11-18 Good software development habits | Zarar's blog { zarar.dev }

  1. Keep Commits Small: Keep each commit focused on a single change to make it easier to track and revert issues. Code that compiles should be committable.
  2. Refactor Continuously: Follow Kent Beck's advice: make changes easy, then make the easy changes. Frequent, small refactorings prevent complex reworks.
  3. Deploy Regularly: Treat deployed code as the only true measure of progress. Frequent deployments ensure code reliability.
  4. Trust the Framework: Don’t test features already covered by the framework; focus on testing your unique functionality, especially with small components.
  5. Organize Independently: If a function doesn’t fit anywhere, create a new module. It’s better to separate logically independent code.
  6. Write Tests First (Sometimes): If unsure about an API’s design, start with tests to clarify requirements. TDD doesn’t have to be strict—write code in workable chunks.
  7. Avoid Duplication After the First Copy-Paste: If code is duplicated, it’s time for an abstraction. Consolidating multiple versions is harder than parameterizing one.
  8. Accept Design Change: Designs inevitably get outdated. Good software development is about adapting to change, not achieving a “perfect” design.
  9. Classify Technical Debt: Recognize three types of technical debt: immediate blockers, future blockers, and potential blockers. Minimize the first, address the second, and deprioritize the third.
  10. Prioritize Testability in Design: Hard-to-test code hints at design issues. Improve testability through smaller functions or test utilities to avoid skipping tests.

🔥2024-11-14 Lessons from my First Exit · mtlynch.io { mtlynch.io }

Selling my first business was a journey filled with excitement, stress, and invaluable lessons. I want to share my experiences to help other entrepreneurs who might be considering a similar path. This post is especially relevant for small business owners and startup founders looking to navigate the complexities of a business exit.


Quote:

Used dedicated accounts for the business

Part of what made TinyPilot’s ownership handoff smooth was that its accounts and infrastructure were totally separate from my other business and personal accounts:

  • I always sent emails related to the business from my @tinypilotkvm.com email address.
  • I always used @tinypilotkvm.com email addresses whenever signing up for services on behalf of TinyPilot.
  • I kept TinyPilot’s email in a dedicated Fastmail account.
    • This wasn’t true at the beginning. TinyPilot originally shared a Fastmail account with my other businesses, but I eventually migrated it to its own standalone Fastmail account.
  • I never associated my personal phone number with TinyPilot. Instead, I always used a dedicated Twilio number that forwarded to my real number.
  • All account credentials were in Bitwarden.

After closing, handing over control was extremely straightforward. I just added the new owner to Bitwarden, and they took over from there. There were a few hiccups around 2FA codes I’d forgotten to put in Bitwarden, but we worked those out quickly.


For example, TinyPilot uses the H.264 video encoding algorithm. It’s patented, so we had to get a license from the patent holder before we shipped that feature. During due diligence, we discovered that the patent license forbade me from transferring the license in an asset sale.

I immediately started imagining the worst possible outcome. What if the patent holder realizes they can block the sale, and they demand I pay them $100k? What if the patent holder just can’t be bothered to deal with a tiny business like mine, and they block the sale out of sheer indifference?

🔥 2024-11-08 Blog Writing for Developers { rmoff.net }

Like a favourite pair of jeans that’s well-worn, comfy, and slightly saggy round the arse, I have a go-to structure for writing. Come to think of it, I use it for lots of conference talks too. It looks like this:

  1. Tell them what you’re going to tell them
  2. Tell them
  3. Tell them what you told them

What this looks like in practice is something along these lines:

  1. An intro

    What is this thing, and why should the reader give af be interested?

    This could be a brief explanation of why I am interested in it, or why you would want to read my take on it. The key thing is you’re relating to your audience here. Not everyone wants to read everything you write, and that’s ok.

    Let people self-select out (or in, hopefully) at this stage, but make it nice and easy. For example, if you’re writing about data engineering, make it clear to the appdev crowd that they should move on as there’s nothing to see here (or stick around and learn something new, but as a visitor, not the target audience).

  2. The article itself

  3. A recap

    Make sure you don’t just finish your article with a figurative mic drop—tie it up nicely with a bow (a 🙇🏻 or a 🎀, either works).

    This is where marketing would like to introduce you to the acronym CTA (Call To Action) 😉. As an author you can decide how or if to weave that into your narrative.

    Either way, you’re going to summarise what you just did and give people something to do with it next. Are there code samples they can go and run or inspect? A new service to sign up for? A video to watch? Or just a general life reflection upon which to ponder.

2024-11-07 Monorepo - Our experience { ente.io }

We switched to a monorepo nine months ago, and it’s been working well for us. Before, we had multiple repositories, which made things like managing pull requests or syncing changes a hassle. With everything in one place now, the workflow feels smoother and simpler. It wasn’t a decision we overanalyzed; it just felt like the right time to try it, and we’ve been happy with the results.

The main pros? First, there’s less repetitive work. Instead of opening multiple pull requests across repos for a single change, now it’s just one. Submodules, which were always a pain to manage, are mostly gone. Everything that needs to work together stays in sync naturally. Refactoring has also become easier because we can see the whole picture in one place, which encourages code improvements over time. Plus, being in the same repo has made us feel more connected as a team. Even small things, like seeing everyone’s changes when pulling updates, help us stay in the loop without extra effort.

As for cons, we honestly haven’t found many. A common concern is that monorepos can get messy or slow as they grow, but for our small team, it hasn’t been an issue. We kept it simple—no strict rules, just “don’t touch the root folder”—and it’s been fine. It might not work the same for larger teams or projects with different dynamics, but for us, it’s been a clear win.

2024-10-14 LogLog Games { loglog.games }

I spent three years using Rust for game development, and after shipping a few games and writing over 100,000 lines of code, I’m stepping away from it. Rust has some great qualities—its performance is top-notch, and it often lets you refactor confidently. But for fast, iterative development, which is crucial for indie games, it just doesn't align well. The borrow checker and Rust’s strictness often force unnecessary refactoring, slowing down the process of prototyping and testing new ideas. Tools like hot reloading, essential for quick feedback loops, are either clunky or nonexistent in Rust. And while the language excels in many technical areas, its game development ecosystem is still young, with fragmented solutions and limited support for things like GUI and dynamic workflows.

For small teams like ours, the priority is delivering fun, polished games quickly. With Rust, I found myself spending more time fighting the language and its ecosystem than focusing on gameplay. Moving forward, we’re transitioning to tools that better support rapid iteration and creativity, even if they’re less "perfect" on paper.

2024-09-29 It's hard to write code for computers, but it's even harder to write code for humans · Erik Bernhardsson { erikbern.com }

image-20241201135538368

Onboarding is Key: Users should get started quickly and see results fast. Fix: Simplify setup. Remove steps and make the tool easy to use immediately. For example, ensure API tokens are ready without extra configuration. The faster users see success, the more likely they’ll stick around.

Show Examples First: Abstract explanations confuse users. Fix: Use examples instead of long concepts. Show how the tool works with real use cases. When I write docs, I always start with practical examples users can copy and tweak.

Errors Need Solutions: Errors frustrate users. Fix: Make error messages helpful. Suggest fixes and show code snippets. A clear path back to success turns frustration into trust.
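
A quick sketch of what a "clear path back to success" can look like in an error message (my own illustration; the config.toml key is hypothetical):

#include <iostream>
#include <stdexcept>
#include <string>

// Instead of a bare "invalid config", point the user at the concrete fix.
[[noreturn]] void fail_missing_key(const std::string& key) {
    throw std::runtime_error(
        "config key '" + key + "' is missing.\n"
        "  Fix: add it to config.toml, for example:\n"
        "    " + key + " = \"<value>\"\n");
}

int main() {
    try {
        fail_missing_key("api_token");
    } catch (const std::exception& e) {
        std::cerr << "error: " << e.what();
        return 1;
    }
}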

Avoid Too Many Ideas: Too much upfront information overwhelms users. Fix: Keep it simple. Focus on a few core ideas to start. When I design a tool, I aim for 3-5 basic concepts that cover most use cases. Fewer concepts, fewer headaches.

Use Familiar Terms: New words confuse people. Fix: Use common terms like "function" instead of inventing new ones. I think about how people already think about code and try to fit my tool into their existing mental model.

Flexibility Matters: Rigid tools frustrate creative users. Fix: Let users program their own solutions with APIs or scripts. Make everything programmable so users can adapt the tool to their needs.

Don’t Overdo Magic: Hidden behaviors often fail in edge cases. Fix: Keep defaults clear and reliable. Avoid adding unnecessary complexity. Unless I’m 99% sure a “magic” behavior will always work, I avoid it. Instead, I focus on being predictable.

Clarity Over Brevity: Short, clever code is hard to read. Fix: Write clear, readable code. Make it easy to follow. I remind myself: people read code far more than they write it.

2024-09-29 Too much efficiency makes everything worse: overfitting and the strong version of Goodhart’s law | Jascha’s blog { sohl-dickstein.github.io }

When you optimize too much, you can make things worse instead of better. This is the essence of the strong version of Goodhart’s Law: when a measure becomes the target, over-optimization can degrade what you originally cared about. This principle, often studied as "overfitting" in machine learning, also applies broadly to systems like education, economics, and governance.

The Problem: When proxies (measurements or secondary goals) are optimized too well, the actual outcomes worsen. For instance, standardized testing shifts focus from genuine learning to test preparation, undermining education. Similarly, rewarding scientists for publications incentivizes trivial or false findings over meaningful progress. Overfitting to proxies creates harmful side effects, from filter bubbles in social media to inequality in capitalism.

How to Fix It: Lessons from Machine Learning

  1. Better Alignment: Make proxies closer to real goals. In machine learning, this involves better data collection. In broader systems, it means crafting laws, incentives, and norms that encourage genuine outcomes, like prioritizing long-term learning over test scores.
  2. Regularization: Introduce penalties or costs for extreme behaviors. Just as machine learning uses mathematical constraints, systems can add friction:
    • Tax extreme wealth disparities or excessive lawsuits.
    • Impose costs for high-volume actions, like bulk emails or algorithmic trading.
    • Penalize complexity to discourage harmful optimization.
  3. Inject Noise: Add randomness to disrupt harmful optimization. Examples include:
    • Randomized selection in competitive admissions to reduce over-preparation.
    • Random trade processing delays to stabilize financial markets.
    • Unpredictable testing schedules to encourage holistic studying.
  4. Early Stopping: Halt optimization before it spirals out of control. In systems, this could mean:
    • Capping time spent on decision-making relative to its stakes.
    • Freezing certain information flows, like press blackouts before elections.
    • Splitting monopolies to prevent market over-consolidation.
  5. Restrict or Expand Capabilities:
    • Restrict: Limit system capacities to prevent runaway effects, like capping campaign finances or AI training resources.
    • Expand: In some cases, more capacity reduces trade-offs, such as developing clean energy or transparent information systems.

BibTeX entry for post:
@misc{sohldickstein20221106,
  author = {Sohl-Dickstein, Jascha},
  title = {{ Too much efficiency makes everything worse: overfitting and the strong version of Goodhart's law }},
  howpublished = "\url{https://sohl-dickstein.github.io/2022/11/06/strong-Goodhart.html}",
  date = {2022-11-06}
}

2024-09-29 Measuring Developers' Jobs-to-be-done - by Abi Noda { substack.com }

2024-09-29 Measuring Developers' Jobs-to-be-done | Hacker News { news.ycombinator.com }

Google used to measure how well developer tools worked by evaluating how they supported certain tasks, like "debugging" or "writing code." However, this approach often lacked specificity that would be useful for tooling teams. For instance, "searching for documentation" is a common task, but the reason behind it—whether it's to "explore technical solutions" or "understand the context to complete a work item"—can meaningfully change a developer's experience and how well tools support them in achieving their goal.

To provide better insights, Google researchers identified the key goals developers are trying to achieve in their work and developed measurements for each goal. In this paper, they explain their process and share an example of how this new approach has benefited their teams.

image-20241201140955980

2024-10-05 Bureaucrat mode - by Andrew Chen - @andrewchen { andrewchen.substack.com }

As companies scale, they often shift from the agile, conviction-driven "Founder mode" to "Bureaucrat Mode," where decision-making slows, and processes dominate. While startups thrive on speed and direct action, large organizations tend to create committees, expand scopes, and reward consensus over outcomes. These tendencies, while rooted in good intentions like collaboration and stability, can cripple innovation and efficiency when scaled excessively.

The Problem: Bureaucrat Mode emerges as companies grow, driven by processes meant to manage complexity. However, these processes often become self-perpetuating, encouraging behaviors that prioritize internal metrics, visibility, and team expansion over meaningful results. Bureaucrats, focused on navigating processes rather than solving problems, replicate themselves by hiring others who thrive in such environments. This cycle of self-replication entrenches inefficiency and resistance to change.

image-20241201141318646

2024-10-10 How to make Product give a shit about your architecture proposal – Andy G's Blog { gieseanw.wordpress.com }

When dealing with Product teams about your architecture proposal, picture yourself as a plumber who's trying to sell different service packages. This analogy highlights how you should present your technical proposals to Product in a way that aligns with their focus on business value. They’re not interested in technical jargon; they want to know how your architecture decision translates into a return on investment.

Remember that Product people are looking for results. Instead of overwhelming them with details about OLTP systems or ETL processes, you need to frame your explanation as a negotiation — highlighting the costs and benefits of each option, just like the plumber did with his service packages.

"Product doesn’t give a shit about how your data is stored. Product cares about products."

The essence here is to avoid diving into the weeds of indexes or table joins until they understand the impact on their budget and timeline. When they ask, “Why is this so expensive?” that’s your cue to explain, in clear terms, the complexity involved in implementing things like OLAP systems or setting up ETL processes.

Approach your conversation by outlining different “packages” — starting with the 🥇 platinum package that covers all technical needs but at a higher cost. This sets the stage for a value discussion, where Product sees the full picture and starts to understand the trade-offs involved.

"Now you can (gently) talk to them about the difference between online transaction processing systems (OLTP) and online analysis processing systems (OLAP)."

The trick is to guide Product through a step-by-step decision-making process, laying out each feature as a line item on an invoice. This approach helps them grasp which elements of your proposal can be trimmed down or delayed to fit within their budget constraints. For example, if they can't afford a new OLAP system, offer scaled-down options, and negotiate on scope and time rather than quality.

🔥 One of the most crucial points is not to compromise on quality. In software development, you should avoid falling into the trap of lowering standards just to meet short-term goals. Sacrificing quality often leads to delivering subpar products that can damage customer satisfaction in the long run. As the article states, “What’s worse, delivering something a customer actually hates, or delivering nothing at all?” Maintaining a baseline of quality ensures that even with limited resources, you're delivering something worthwhile.

If the Product team suggests cutting corners to fit the project into a two-week sprint, resist the temptation. The iron triangle of software development — time, scope, and budget — should always consider quality as a non-negotiable factor.

Ultimately, you're helping Product to ruthlessly prioritize tasks to deliver the best possible outcomes within the given constraints. In these negotiations, scope will often be the main variable that can be adjusted to balance the budget and timeline. And when the tables turn, and it’s your idea that needs their buy-in, present it in terms of ROI to make a compelling case.

Think like a plumber: when you know the value of what you’re selling, it’s easier to convince others to invest in the right solution instead of a quick fix. Always push for a solution that maintains a minimum level of quality, even if it means delivering less within the same time frame.

2024-11-03 Get Me Out Of Data Hell — Ludicity { ludic.mataroa.blog }

📹 2024-11-03 Nikhil Suresh - Skills that programmers need, to defend both their code and their careers - YouTube { www.youtube.com }

This blog narrates an engineer's daily struggle with an overly complex and inefficient data warehouse system. Despite working within an ostensibly supportive team, the engineer describes their workplace as a "Pain Zone," rife with convoluted processes, unchecked errors, and cultural dissonance. Here’s a detailed breakdown of the main points:

The story begins with a ritual of starting the day with a senior engineering partner. Together, they embark on a shared mission to navigate the "Pain Zone," their term for the warehouse system plagued by unnecessary complexity. The data warehouse in question involves copying text files from different systems, and ideally, this process should require only ten steps. However, the engineer discovers 104 discrete operations in the architecture diagram, a staggering example of the platform's inefficiency.

"Retrieve file. Validate file. Save file. Log what you did. Those could all be one point on the diagram...That's ten. Why are there a hundred and four?"

The engineer describes the necessity of "Pain Zone navigation," a practice where engineers rely on pair programming for moral support to withstand the psychological toll of working in such an environment. The issue isn’t only technical; it’s deeply cultural. A culture that demands velocity while disregarding craftsmanship fosters an atmosphere where complexity and inefficiency go unchallenged. This attitude, the author suggests, results in the degradation of code quality, with engineers penalized for trying to refactor code.

To illustrate the dysfunction further, the author recounts a routine task: checking if data from sources like Google Analytics is flowing correctly. What they find instead is garbled JSON strings dumped in the logs without logical structure, with 57,000 distinct entries where there should be fifty. This revelation shows that for over a year, the team has been collecting "total nonsense" in the logs.

"We only have two jobs. Get the data and log that we got the data. But the logs are nonsense, so we aren't doing the second thing, and because the logs are nonsense I don't know if we've been doing the first thing."

Rather than address this critical error, management insists on working with the erroneous logs to maintain "velocity," a term often implying efficiency but, in this case, prioritizing speed over accuracy. The author describes the frustration of being told to parse nonsensical data instead of fixing the core issues—a situation summarized by the team motto: "Stop asking questions, you're only going to hurt yourself."

The cultural disconnect deepens as the author tries to work with data from Twitter, only to find that log events lack an event ID. A supposed expert suggests using a column with ambiguous file path strings, each lacking logical identifiers, requiring complex regular expressions to infer events.

"I am expected to use regular expressions to construct a key in my query."

In yet another disheartening revelation, the author learns that the Validated: True log entries are merely hardcoded placeholders, not actual validation statuses. The logs fail to capture real system states, effectively undermining auditability.

By the end, the author reaches a breaking point, realizing their values diverge sharply from those of the organization. This disconnect prompts them to resign, choosing to invest their time in personal projects and consulting instead. In a closing reflection, they criticize the industry for investing in trendy tools like Snowflake and Databricks, without hiring engineers who understand how to design simple, effective systems.

"I could build something superior to this with an ancient laptop, an internet connection, and spreadsheets. It would take me a month tops."

This piece is a critique of both overly complex architectures and a corporate culture that prioritizes speed over quality. It highlights the importance of valuing craftsmanship and straightforward design in building sustainable and efficient data systems.

2024-11-24 SciFi book: Manna – Table of Contents | MarshallBrain.com { marshallbrain.com } (RIP Marshall)

With half of the jobs eliminated by robots, what happens to all the people who are out of work? The book Manna explores the possibilities and shows two contrasting outcomes, one filled with great hope and the other filled with misery.

Join Marshall Brain, founder of HowStuffWorks.com, for a skillful step-by-step walk through of the robotic transition, the collapse of the human job market that results, and a surprising look at humanity’s future in a post-robotic world.

Then consider our options. Which vision of the future will society choose to follow?

image-20241124100143433

  • 😺 The building we exited was another one of the terrafoam projects. Terrafoam was a super-low-cost building material, and all of the welfare dorms were made out of it. (Chapter 4)

Newsletters

2024-09-20 JavaScript Weekly Issue 705: September 19, 2024 { javascriptweekly.com }

2024-09-29 Digital signatures and how to avoid them { newsletter.programmingdigest.net }

2024-09-29 Implementing Blocked Floyd-Warshall algorithm { newsletter.csharpdigest.net }

2024-10-18 JavaScript Weekly Issue 709: October 17, 2024 { javascriptweekly.com }

2024-10-20 How Discord Reduced Websocket Traffic by 40% { newsletter.programmingdigest.net }

2024-10-27 A Brief Introduction to the .NET Muxer { newsletter.csharpdigest.net }

2024-10-27 That's Not an Abstraction { newsletter.programmingdigest.net }

2024-11-17 Exploring the browser rendering process { newsletter.programmingdigest.net }

2024-12-01 Legacy Shmegacy { newsletter.programmingdigest.net }

Working with People

2024-11-23 Take the Thomas-Kilmann Instrument | Improve How You Resolve Conflict {kilmanndiagnostics.com}

image-20241123122604326

Related:

In conflict situations, individuals often exhibit different behavioral strategies based on their approach to managing disagreements. Avoiding is one of five strategies commonly identified within conflict management models like the Thomas-Kilmann Conflict Mode Instrument (TKI); all five are outlined below:

Avoiding

  • Behavior: The individual sidesteps or withdraws from the conflict, neither pursuing their own concerns nor those of the other party.
  • When it's useful: When the conflict is trivial, emotions are too high for constructive dialogue, or more time is needed to gather information.
  • Risk: Prolonging the issue may lead to unresolved tensions or escalation.

Competing

  • Behavior: The individual seeks to win the conflict by asserting their own position, often at the expense of the other party.
  • When it's useful: When quick, decisive action is needed (e.g., in emergencies) or in matters of principle.
  • Risk: Can damage relationships and lead to resentment if overused or applied inappropriately.

Accommodating

  • Behavior: The individual prioritizes the concerns of the other party over their own, often sacrificing their own needs to maintain harmony.
  • When it's useful: To preserve relationships, resolve minor issues quickly, or demonstrate goodwill.
  • Risk: May lead to feelings of frustration or being undervalued if used excessively.

Compromising

  • Behavior: Both parties make concessions to reach a mutually acceptable solution, often splitting the difference.
  • When it's useful: When a quick resolution is needed and both parties are willing to make sacrifices.
  • Risk: May result in a suboptimal solution where neither party is fully satisfied.

Collaborating

  • Behavior: The individual works with the other party to find a win-win solution that fully satisfies the needs of both.
  • When it's useful: When the issue is important to both parties and requires creative problem-solving to achieve the best outcome.
  • Risk: Requires time and effort, which may not always be feasible in time-sensitive situations.

Each of these strategies has its strengths and limitations, and the choice of approach often depends on the context of the conflict, the relationship between the parties, and the desired outcomes.

Wellbeing

2024-11-03 On Burnout, Mental Health, And Not Being Okay — Ludicity { ludic.mataroa.blog }

In this deeply personal blog post, the author reflects on the mental health struggles that many people face, sharing candid experiences with burnout and severe depression. They emphasize that everyone will have times when they are "Not Okay," and it's important to acknowledge this without shame. Through their own journey of overcoming hardship—ranging from academic pressures to toxic workplaces—they highlight the significance of seeking help, making lifestyle changes, and understanding that recovery is possible. The author encourages readers to care for themselves and others, reminding us that empathy and support can make a profound difference in navigating life's challenges.

✨ New wiki category:

2024-12-01 Psy-Burnout (mental wellbeing) { blog.zharii.com }

image-20241201143408170

Fun / Retro

2024-11-23 calculatorwords.pdf 344 Words You Can Spell On a Calculator

Compiled by Jim Bennett 2014

image-20241123191431682

The PDF is a three-column table: each entry pairs an English word with the calculator digits that spell it when the display is read upside down. A few sample entries: BE = 38, BELL = 7738, HELLO = 0.7734, BOOGIE = 316008, GOOGLE = 376006, EGGSHELL = 77345663.

2024-11-23 Rendering “modern” Winamp skins in the browser / Jordan Eldredge { jordaneldredge.com }

image-20241122173439101

2024-11-11 Pieter.com - Pieter's Official Homepage { pieter.com }

image-20241110220126116

2024-11-07 MAX SIEDENTOPF — Passport Photos { maxsiedentopf.com }

image-20241106225118042

2024-10-13 stenzek/duckstation: Fast PlayStation 1 emulator for x86-64/AArch32/AArch64/RV64 { github.com }

DuckStation is a simulator/emulator of the Sony PlayStation(TM) console, focusing on playability, speed, and long-term maintainability. The goal is to be as accurate as possible while maintaining performance suitable for low-end devices. image-20241201135052551

2024-06-18 Where Did You Go, Ms. Pac-Man? — Thrilling Tales of Old Video Games

image-20241201141626837

2024-06-27 Liquid Layers

2024-06-27 Liquid Layers | Hacker News

image-20241201141748969

2024-06-27 Science Fiction Writer Robert J. Sawyer: WordStar: A Writer's Word Processor

image-20241201141845220

2024-06-28 Advent of Code 2023 Day 19: Aplenty - YouTube

Advent of Code in Excel image-20241201142055161

2024-08-29 Web Design Museum - Discover old websites, apps and software { www.webdesignmuseum.org }

image-20241201142134615

2024-09-19 crowdwave.com { www.crowdwave.com }

Show HN: I made crowdwave – imagine Twitter/Reddit but every post is a voicemail

image-20241201142255701

2024-08-28 Monkeytype | A minimalistic, customizable typing test { monkeytype.com }

image-20240827232641786

Inspiration!

2024-12-01 Andrew Ayer in the Fediverse { follow.agwa.name }

I honestly liked the design and layout

image-20241130200134882

2024-12-01 To the Fediverse! { www.fediverse.to } image-20241130200321657

2024-12-01 Pleroma — a lightweight fediverse server { pleroma.social }

image-20241130200618867

2024-12-01 src/App.scss · develop · Pleroma / pleroma-fe · GitLab { git.pleroma.social } Some good examples of using CSS variables with SCSS image-20241130201016998

2024-11-30 GitHub - tldraw/make-real: Draw a ui and make it real {github.com}

2024-11-30 make real • tldraw {makereal.tldraw.com}

2024-11-30 GitHub - SawyerHood/draw-a-ui: Draw a mockup and generate html for it {github.com} ✨FORK SOURCE✨

image-20241130115413381 image-20241130115438468

2024-11-30 tldraw | Steve Ruiz | Substack {tldraw.substack.com}

image-20241130151615673

2024-11-27 Text Blaze: Snippets and Templates for Chrome {blaze.today}

image-20241127150049722

2024-11-26 Monocle · Access and transform immutable data { www.optics.dev }

image-20241125204603802

2024-08-28 The Monospace Web { owickstrom.github.io }

image-20241201135223177

image-20241201135322097

2024-11-24 triyanox/lla: A modern alternative to ls { github.com }

aww! ls with plugins!

image-20241124135202355

2024-11-24 I made an ls alternative for my personal use | Hacker News { news.ycombinator.com } elashri: There seem to be a lot of projects now competing to replace ls (to suit people's preferences).

For reference, these are the ones I am familiar with. They are still active, in contrast to things like exa, which is not maintained anymore.

eza: (https://github.com/eza-community/eza)

lsd: (https://github.com/Peltoche/lsd)

colorls: (https://github.com/athityakumar/colorls)

g: (https://github.com/Equationzhao/g)

ls++: (https://github.com/trapd00r/LS_COLORS)

logo-ls: (https://github.com/canta2899/logo-ls) - this is forked because main development stopped 4 years ago.

Any more?

Personally I prefer eza and wrote a zsh plugin that is basically a set of aliases matching my muscle memory.

2024-11-24 Frosted Glass from Games to the Web - tyleo.com { www.tyleo.com }

image-20241123212224379

2024-11-20 WebVM - Linux virtualization in WebAssembly { webvm.io }

WebVM is a virtual Linux environment running in the browser via WebAssembly.

WebVM is powered by the CheerpX virtualization engine, which enables safe, sandboxed, fully client-side execution of x86 binaries.

CheerpX includes an x86-to-WebAssembly JIT compiler, a virtual block-based file system, and a Linux syscall emulator.

[News] WebVM 2.0: A complete Linux Desktop Environment in the browser: https://labs.leaningtech.com/blog/webvm-20

Try out the new Alpine / Xorg / i3 WebVM: https://webvm.io/alpine.html

2024-11-08 Home: Mushroom Color Atlas { www.mushroomcoloratlas.com }

image-20241107222737497

2024-11-07 Your Hacker News { yourhackernews.com }

image-20241106225716990

2024-11-07 Aesop's Fables Interactive Book | Read.gov - Library of Congress { read.gov }

image-20241106224639993

image-20241106224700805

2024-11-07 McMaster-Carr { www.mcmaster.com }

McMaster-Carr’s website, www.mcmaster.com, is renowned for its speed, achieved through minimalist design, server-side rendering, and strategic use of technology like ASP.NET and JavaScript libraries. Prefetching techniques preload pages as users hover, ensuring near-instant navigation, while CDNs cache content globally to reduce latency. This streamlined, user-focused approach lets customers quickly access and order from McMaster-Carr’s extensive catalog, making it a leader in industrial supply and a favorite for its seamless, efficient experience.

image-20241106224136854

2024-10-05 Methods of Mandarin { isaak.net }

I got pretty good in Mandarin within 12 months of rigorous part-time study. I'm not even close to perfectly fluent, but I got far into intermediate fluency. Read my personal story of learning Mandarin here: isaak.net/mandarin

This post on my Methods of Mandarin (MoM) is for fellow language learners and autodidacts. This isn't a thorough how-to guide. I won't be holding your hand. It's more like a personal notebook of what worked for me. I'm sharing my personal Anki deck and then I'll describe all my methods and tips. People's styles and methods differ.

2024-08-29 sjpiper145/MakerSkillTree: A repository of Maker Skill Trees and templates to make your own. { github.com }

image-20241201134630051

2024-09-18 Dune Shell { adam-mcdaniel.github.io }

image-20241201134722315

2024-09-19 Comic Mono | comic-mono-font { dtinth.github.io }

image-20241201134806840

2024-09-20 Math4Devs: List of mathematical symbols with their JavaScript equivalent. { math4devs.com }

image-20241201134847457

· 15 min read

⌚ Nice watch!

2024-11-24 Keynote: The Aging Programmer - Kate Gregory - YouTube { www.youtube.com }

image-20241123231812184

Maintain vision health by getting regular eye check-ups, using appropriate glasses, and addressing night driving challenges with clean windshields and adaptive lighting.

Build physical strength and stamina by incorporating strength training (e.g., push-ups, squats) and aerobic activities like walking or biking into daily life.

Reduce pain and joint issues with anti-inflammatories like naproxen as needed and by focusing on flexibility and range-of-motion exercises.

Protect hearing through regular hearing tests starting at age 50, using hearing aids if necessary, and avoiding loud environments or overly high headphone volumes.

Improve nutrition by prioritizing fruits, vegetables, and whole foods while limiting ultra-processed items. Eat meals made with care and hydrate appropriately.

Enhance sleep quality by focusing on creating a comfortable sleep environment (“sleep joy”) and getting the amount of rest your body needs without guilt.

Safeguard brain health using organizational strategies, pursuing lifelong learning, and embracing new tools and technologies to stay sharp.

Foster emotional resilience by prioritizing gratitude and optimism, avoiding unnecessary negativity, and working toward a calm and joyful outlook.

Adapt to changes in ability by recognizing limitations as they arise and addressing them proactively with tools, technology, and support systems.

Combat workplace biases against older programmers by emphasizing your experience, exploring consulting or freelancing, and pushing back against assumptions about learning capacity.

Plan for retirement by calculating your financial “number,” balancing saving with enjoying the present, and planning meaningful activities to avoid boredom and isolation.

Improve work-life balance through flexible work arrangements, prioritizing health, and focusing on work that aligns with your values and passions.

Build relationships by maintaining friendships across generations and engaging with new communities through hobbies, volunteering, or neighborhood activities.

Prevent loneliness by cultivating social engagement in retirement through structured activities, regular interactions, or volunteering.

Develop healthy habits by avoiding smoking, using sunscreen, and embracing preventive measures like vaccinations.

Incorporate joy and play into daily life through hobbies, nature, and small pleasures, focusing on activities that spark happiness and relaxation.

Create a lasting legacy by organizing and preserving personal and professional projects, ensuring they are meaningful and accessible for others.

Handle loss and change by accepting the inevitability of loss while actively seeking new experiences and connections to balance those losses.

Address unexpected challenges by consulting professionals for new or worsening health issues, as not all problems stem from aging.

Reflect on life purpose and make choices that align with long-term happiness and fulfillment.

Exercise regularly to support both physical and mental well-being.

Save for the future while enjoying life in the present.

Stay socially engaged through hobbies, work, or volunteering.

Eat a balanced diet and focus on whole foods for overall health.

Adapt to limitations by embracing tools and strategies that maintain independence.

Build friendships across generations for mutual support and enrichment.

Cultivate a sense of purpose through meaningful work or activities.

Kate Gregory’s message emphasizes that aging well—whether as a programmer or in any field—requires proactive effort, adaptability, and a focus on joy and purpose.

2024-11-23 My Own Nightmare HR Manager Story (Tip: Every Company Has An A-Hole) - YouTube { www.youtube.com }

image-20241123000830712

In every workplace, you’ll encounter a corporate jerk—the kind of person who thrives on creating chaos, manipulating others, and throwing people under the bus. These individuals are frustrating, but they don’t have to define your career. Let me share a condensed version of my experience dealing with one and the key strategies I used to handle it.

I took on a senior recruiter role with an RPO organization, filling high-level positions nationwide. Before my official role started, I was asked to temporarily support a chaotic plant with high turnover. From the start, the HR manager at the plant undermined my work, deviated from processes, and made false accusations to my boss about my performance. Despite the challenges, I stayed professional and focused on achieving results.

Later, when assigned to the same plant for senior-level roles, the HR manager again tried to sabotage me. This time, I was ready. Armed with detailed documentation of every interaction, I exposed her dishonesty, which damaged her credibility. Though the plant's issues persisted, I didn’t let her behavior derail me. Shortly after, I moved on to a better opportunity, taking invaluable lessons with me.

Lessons Learned

  1. Document Everything: Keep detailed records of all interactions and deliverables. These become your safety net against false accusations.
  2. Maintain Professionalism: Stay composed and formal in your interactions. Don’t stoop to their level.
  3. Set Boundaries: Be clear about your role and responsibilities. Don’t let others exploit your flexibility.
  4. Don’t Internalize Their Behavior: Their actions are a reflection of their own issues, not your worth or abilities.

Corporate jerks are an unavoidable reality in most workplaces, but they don’t have to define your career. Use strategy, stay professional, and remember: you’re in control of your trajectory—not them. When necessary, don’t hesitate to move on to an environment where you can thrive.

2024-11-23 JavaScript in places you didn’t expect - YouTube { www.youtube.com }

image-20241122165048524

JavaScript is everywhere—from browsers to unexpected platforms like game consoles and operating systems. Despite its quirks and criticisms, its versatility has made it indispensable. This post is for developers and tech enthusiasts curious about how JavaScript extends beyond typical web applications, influencing industries like gaming, desktop environments, and more.

JavaScript Beyond Browsers JavaScript is not just a browser language anymore. From GNOME’s desktop environment in Linux, which is almost 50% JavaScript, to Windows 11’s React Native-powered start menu and recommended sections, it’s embedded in operating systems. Even the PlayStation 5 relies heavily on React Native for its interface.

JavaScript in Gaming Consoles Microsoft’s Xbox and Sony’s PlayStation both integrate React Native into their systems. Historically, web technologies like HTML were also used (e.g., Nintendo Wii’s settings menu), showing a longstanding trend of leveraging web tech for ease of development in consoles.

Gaming and UI Layers Even major game titles like Battlefield 1 use JavaScript and React for their UI layers, thanks to tools like MobX for state management. Developers appreciate its flexibility in managing complex UI interactions over building bespoke solutions.

Game Development: JavaScript vs. C++ Vampire Survivors showcases a fascinating dual approach: its browser-based JavaScript version serves as the prototype, while a team ports it to C++ for consoles. This method ensures performance optimization without sacrificing the rapid development benefits of JS.

React’s Evolution and Adaptation React Lua, originally a Roblox project, brings React’s paradigms to Lua-based environments. This shows how React’s influence transcends JavaScript, becoming a staple for creating UIs even in non-JS ecosystems.

Why JavaScript? JavaScript enables faster iteration, broader developer accessibility, and reduced specialization needs. Whether it’s GNOME choosing it for extensibility or game studios adopting React for UI efficiency, its ubiquity stems from practical needs.

2024-11-18 The Most Important API Design Guideline - No, It's Not That One - Jody Hagins - C++Now 2024 - YouTube { www.youtube.com }

This talk is fun, but more theoretical and philosophical.

image-20241117212533364 image-20241117223655034


📝 Property-Based Testing for Joining an Array to a String with Delimiter in C++

Definition
Property-based testing involves specifying general properties a function should satisfy for a wide range of inputs. In this example, we will test a function that joins an array of strings with a delimiter into a single string. The properties we want to validate are:

  1. The delimiter should only appear between elements, not at the start or end.
  2. If the array has one element, the result should be the element itself without the delimiter.
  3. An empty array should produce an empty string.

C++ Code Example using rapidcheck

Here’s a property-based test using the rapidcheck library in C++ to test a join function that joins a vector of strings with a specified delimiter:

#include <rapidcheck.h>
#include <algorithm>
#include <string>
#include <vector>
#include <sstream>
#include <iostream>

// Function to join array with a delimiter
std::string join(const std::vector<std::string>& elements, const std::string& delimiter) {
    std::ostringstream os;
    for (size_t i = 0; i < elements.size(); ++i) {
        os << elements[i];
        if (i != elements.size() - 1) { // Avoid trailing delimiter
            os << delimiter;
        }
    }
    return os.str();
}

int main() {
    rc::check("Joining should produce a correctly delimited string",
              [](const std::vector<std::string>& elements, const std::string& delimiter) {
        std::string result = join(elements, delimiter);

        // Property 1: The delimiter should appear only between elements.
        // Splitting only inverts joining when no element contains any character
        // of the delimiter; otherwise the split would find spurious occurrences.
        const bool splittable = std::none_of(
            elements.begin(), elements.end(),
            [&](const std::string& e) { return e.find_first_of(delimiter) != std::string::npos; });

        if (elements.size() > 1 && !delimiter.empty() && splittable) {
            // Split result by delimiter and check the components match the input
            std::vector<std::string> parts;
            std::string::size_type start = 0, end;
            while ((end = result.find(delimiter, start)) != std::string::npos) {
                parts.push_back(result.substr(start, end - start));
                start = end + delimiter.length();
            }
            parts.push_back(result.substr(start));

            // Assert parts match elements
            RC_ASSERT(parts == elements);
        }

        // Property 2: If there's only one element, the result should match that element directly
        if (elements.size() == 1) {
            RC_ASSERT(result == elements[0]);
        }

        // Property 3: If the array is empty, the result should be an empty string
        if (elements.empty()) {
            RC_ASSERT(result.empty());
        }
    });

    return 0;
}

Note that rc::check runs the test 100 times with different randomly generated inputs. On failure, it reports the failing configuration and the random seed in the output, so the case can be reproduced and debugged.

Explanation of Example

  1. Property 1: Ensures that if multiple elements are joined with a delimiter, the delimiter only appears between elements, not at the start or end.
  2. Property 2: Checks that if the array has only one element, the function returns the element itself without any delimiter.
  3. Property 3: Confirms that if the input array is empty, the output string is empty.

This approach guarantees that the join function works as expected across diverse inputs, making it more robust against edge cases such as empty arrays, single-element arrays, and unusual delimiter values.

Links

2024-11-18 emil-e/rapidcheck: QuickCheck clone for C++ with the goal of being simple to use with as little boilerplate as possible. { github.com }


2024-11-19 Stop Solving Problems for Your Development Team! - YouTube { www.youtube.com }

image-20241118223517019

image-20241118223553330


For technical leaders, the balance between leading effectively and empowering their team can be challenging. Whether you’re a software engineer managing junior developers or a product owner guiding associates, the traditional approach of “just give the answer” can lead to dependency and frustration for both you and your team. This post explores the value of coaching-driven leadership—a method that empowers your team to become self-sufficient, creative problem-solvers. If you’re in any technical or managerial role, understanding how to guide without micromanaging is essential. Learn how adopting a coaching approach can transform your team’s efficiency, autonomy, and collaboration.

The Shift from Solving Problems to Empowering People

A coaching-based leadership style redefines how leaders approach problem-solving with their teams. Instead of quickly providing answers to move tasks along, this approach encourages team members to develop the skills to tackle issues independently, ultimately creating a more resilient and capable workforce. Below are some key insights and advice on how to lead through empowerment:

Encouraging Self-Reliance Instead of Dependency

  • Why It Matters: When leaders constantly solve problems for others, it builds dependency. Empowering team members to find their own solutions helps reduce your stress and increases their confidence.
  • How to Do It: Encourage team members to exhaust all possible resources and approaches before coming to you. Ask questions like, “How would you solve this if I weren’t available?” This encourages them to think independently.

Asking Powerful, Resourceful Questions

  • Why It Matters: A quick solution often leads to repeated questions. When leaders ask resourceful questions, they prompt team members to analyze and solve problems on their own.
  • How to Do It: Instead of offering solutions, ask questions that challenge their thought processes. Examples include:
    • “What other approaches have you considered❓️”
    • “Can this problem be broken down into smaller tasks❓️”

This approach builds critical thinking and problem-solving skills.

Fostering a Growth-Oriented Mindset

  • Why It Matters: Viewing team members as capable individuals with potential is essential. By recognizing and nurturing their strengths, leaders can help people grow into their roles more effectively.
  • How to Do It: Reframe your thinking to see team members as resourceful and capable. Focus on their potential and ask questions that encourage them to broaden their perspectives, such as, “What new solutions might you try if you had more resources?”

Prioritizing Long-Term Gains Over Short-Term Fixes

  • Why It Matters: Quick answers may solve today’s problem, but they build future dependency. Investing in a coaching style fosters autonomy, saving time and stress in the long run.
  • How to Do It: Resist the urge to provide immediate solutions. Instead, encourage team members to analyze challenges thoroughly, which leads to more sustainable growth and resilience.

Practical Applications of Coaching in Technical Leadership

For leaders looking to implement these coaching principles, here are specific areas where a coaching mindset can be applied effectively:

  • Code Reviews: Instead of dictating how code should look, ask questions about their logic and problem-solving approach. This not only ensures quality but also deepens their understanding.
  • Design and Project Reviews: Use design critiques as opportunities to help team members articulate their design choices, fostering a culture of open dialogue and improvement.
  • Debugging and Troubleshooting: When assisting with debugging, ask team members to consider alternative solutions or explain their thought process rather than simply fixing the problem.
  • Project Planning: Encourage team members to independently explore solutions to potential obstacles by asking them to consider all options and resources available.

2024-11-24 How regexes got catastrophic - YouTube { www.youtube.com }

image-20241124120931384

Introduction

Regular expressions (regexes) are a foundational tool in programming, celebrated for their ability to match patterns efficiently and elegantly. However, their widespread use has exposed critical flaws in how they are implemented in most programming environments. What begins as a theoretical marvel often translates into real-world inefficiencies and vulnerabilities, leading to catastrophic outcomes like server crashes from regex denial of service (ReDoS) attacks.

This post unpacks the evolution of regex algorithms, contrasts their efficiency, and explores how poor implementation choices have led to systemic issues. Whether you're a systems programmer, web developer, or curious about computational theory, understanding regex's hidden complexities will change how you approach pattern matching.

1. The Two Faces of Regex Algorithms

Regex engines typically rely on two main algorithms: the lockstep algorithm (also known as Thompson's algorithm) and backtracking. Here's how they stack up:

  • Lockstep Algorithm: This algorithm operates with predictable performance, scaling at worst with the product of the pattern size and the input size (and only linearly in the input alone). It treats all possible paths through a regex simultaneously, avoiding exponential blowups.
  • Backtracking Algorithm: While intuitive and flexible (especially for complex features like backreferences and capturing groups), backtracking scales exponentially in the worst case. This flaw enables catastrophic backtracking, where a regex takes impractically long to resolve, even on short inputs.

2. Exponential Backtracking in Practice

Using backtracking means every possible path through a regex is explored individually. When paths multiply exponentially—such as in nested structures or poorly constructed patterns—the execution time balloons. For instance:

  • A regex engine using backtracking may take 24 ticks to match a complex string, compared to only 18 ticks with the lockstep algorithm.
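
As a minimal illustration (my own sketch, not from the talk): the classic pathological pattern (a+)+ followed by a suffix that can never match shows the blow-up directly. Most standard-library std::regex implementations use a backtracking engine, so the timings below should grow roughly exponentially with the input length, while a lockstep engine would stay flat.

#include <chrono>
#include <iostream>
#include <regex>
#include <string>

int main() {
    // Nested quantifier plus an impossible suffix: the classic ReDoS shape.
    const std::regex pathological{"(a+)+$"};

    for (int n = 16; n <= 26; n += 2) {
        std::string input(n, 'a');
        input += 'X'; // guarantees the overall match fails, forcing backtracking

        const auto start = std::chrono::steady_clock::now();
        const bool matched = std::regex_search(input, pathological);
        const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                            std::chrono::steady_clock::now() - start).count();

        // On a backtracking engine, expect the time to roughly double with
        // every extra 'a'; matched will always print 0 here.
        std::cout << "n=" << n << " matched=" << matched << " time=" << ms << " ms\n";
    }
}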

3. Historical Decisions with Long-Lasting Impacts

The dominance of backtracking stems from historical choices made during the development of early Unix utilities:

  • Ken Thompson, the creator of regexes, implemented a lockstep-based engine in the 1960s. However, later tools like ed and grep shifted to backtracking, prioritizing simplicity and flexibility over performance.

This decision, compounded by the introduction of features like backreferences and greedy quantifiers, locked most regex engines into backtracking implementations. Over time, these became embedded in standard libraries across programming languages, making lockstep a rarity.

4. Regex Denial of Service (ReDoS)

The vulnerability of backtracking manifests starkly in ReDoS attacks:

  • A specially crafted regex input can force an engine to explore every possible path, consuming excessive CPU cycles and halting services.
  • Examples include outages at Stack Exchange (2016) and Cloudflare (2019) due to poorly constructed regexes handling unexpected inputs.

5. Features That Complicate Performance

While features like capturing groups, backreferences, and non-greedy modifiers add functionality, they exacerbate backtracking's inefficiencies. For instance:

  • Capturing groups in backtracking engines are straightforward but introduce state-tracking complexities in lockstep implementations.
  • Backreferences break the theoretical constraints of regular languages, making efficient lockstep implementations infeasible.

6. Modern Solutions

Some modern regex engines, like Google's RE2, abandon backtracking altogether, focusing on performance and predictability. RE2 enforces strict adherence to regular language constraints, ensuring linear or quadratic time complexity.

While sacrificing backreferences and some advanced features, engines like RE2 are critical for applications requiring robust and reliable performance, such as large-scale web services.
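
For contrast, here is a minimal sketch (my own, assuming RE2 is installed and the program is linked with -lre2) of the same pathological pattern under RE2, which compiles the regex to an automaton and scans the input once:

#include <iostream>
#include <string>
#include <re2/re2.h>

int main() {
    std::string input(100000, 'a');
    input += 'X';

    // The pattern that is pathological for backtracking engines is harmless
    // here: RE2 matches in time linear in the input length.
    RE2 pattern("(a+)+$");
    std::cout << "matched=" << RE2::PartialMatch(input, pattern) << "\n";

    // Backreferences such as "(a)\1" are rejected by RE2 by design, since
    // they cannot be matched in guaranteed linear time.
    RE2 backref("(a)\\1");
    std::cout << "backreference pattern ok=" << backref.ok() << "\n"; // prints 0
}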

· 40 min read

⌚ Nice watch!

In this blog post, I'll be sharing a collection of videos with concise content digests. These summaries extract the key points, focusing on the problem discussed, its root cause, and the solution or advice offered. I find this approach helpful because it allows me to retain the core information long after watching the video. This section will serve as a dedicated space for these "good watches," presenting only the most valuable videos and their takeaways in one place.

2024-10-16 Can Chinese Speakers Read Japanese? - YouTube { www.youtube.com }

image-20241016003254921

image-20241016003344065

2024-11-03 Keynote: Learning Is Teaching Is Sharing: Building a Great Software Development Team - Björn Fahller - YouTube { www.youtube.com }

image-20241102181531241

We attended a talk by Björn Fahller at ACCU 2024, focusing on how learning, teaching, and sharing are interdependent and critical to team success and personal growth. Below are key steps and ideas that were covered, with some outcomes noted and a few clarifications where needed.

1. Emphasizing Open Sharing for Safety and Improvement (13:52-14:36): Fahller shared an anecdote from 1968 about Swedish military aviation, highlighting the importance of allowing team members to communicate openly, especially about mistakes or difficulties, without fear of punishment. This approach encourages honesty and helps prevent repeated mistakes.

"Military aviation is dangerous... let them openly, and without risk for punishment, share the problems they face while flying."

Outcome: Building a safe environment for sharing leads to a culture where team members can discuss failures without fear, helping the team learn from each experience and improve.

🤖 GPT: Fahller’s translation suggests he views open communication as essential to growth and trust in teams, especially in high-stakes fields.

2. Encouraging Question-Asking and Knowledge Sharing (20:00): In discussing "Sharing is Caring," Fahller emphasized the need for team members to bring up issues or observations that might seem trivial to ensure continuous improvement. He gave examples from aviation, such as pointing out gusts of wind affecting landing, to show how small insights can contribute to collective knowledge.

Outcome: Actively sharing observations improves understanding and may reveal underlying problems that would otherwise go unnoticed. Open communication is key to refining processes.

🤖 GPT: Fahller’s examples reinforce the idea that even seemingly minor details should be voiced -- they may be crucial in the big picture.

3. Addressing Information Overload in Teams (37:52): New team members often feel overwhelmed by the volume of information shared by experienced team members. Fahller suggested that newcomers should ask experienced members to slow down, provide context, and "paint the scene" so they can understand the background of the tasks.

"Ask them to paint the scene. What are they trying to achieve? What is it that is not working?"

Outcome: When we take the time to explain context to newcomers, it helps bridge knowledge gaps and allows everyone to contribute effectively.

🤖 GPT: This approach builds understanding but also patience and humility in experienced team members by reminding them to make knowledge accessible.

4. Creating a Positive Review Culture (33:47): In discussing code reviews, Fahller contrasted two styles: dismissive comments (e.g., "I don’t understand. Rewrite!") vs. constructive feedback (e.g., "Can you explain why you chose to do it this way?"). He emphasized that reviews should be treated as educational opportunities rather than judgment sessions.

Outcome: Constructive reviews foster a growth-oriented environment and allow both the reviewer and reviewee to learn. Constructive feedback motivates improvement, while dismissive comments discourage engagement.

🤖 GPT: A consistent, constructive review culture also promotes long-term trust and makes code quality a shared team responsibility.

5. Handling Toxicity in the Workplace (55:45):

In this segment, Björn Fahller tackled the issue of toxicity within teams and its corrosive effects on collaboration, morale, and individual well-being. He addressed specific toxic behaviors that often crop up in workplaces, describing them not as isolated incidents but as patterns that can erode trust and productivity if left unchecked. Fahller’s examples of toxic behavior included:

  • "The weekly dunce hat" – Singling out someone each week as a scapegoat or object of ridicule, effectively creating an atmosphere of shame and fear.
  • Blame-seeking – Looking for someone to hold responsible for problems, rather than investigating issues constructively or as a team.
  • Threats, pressure, fear, and bullying – Using intimidation tactics to push individuals into compliance, often stifling creativity, openness, and morale.
  • Ghosting – Ignoring someone’s contributions or input entirely, which Fahller noted can make people feel alienated and undervalued.
  • Stealing credit – Taking recognition for someone else’s work, which not only demoralizes the actual contributor but also creates a culture of mistrust.

Fahller stressed that these behaviors are not only demoralizing but actively prevent individuals from sharing ideas and asking questions openly. Such an environment can force people into silence and self-protection, hindering the team’s ability to learn from mistakes and innovate. He emphasized that the first step in combating toxicity is recognition—understanding and identifying toxic patterns when they appear.

"If you're not respected at work," Fahller advised, the first course of action is to try to find an ally. An ally can provide a supportive voice and help validate one's experiences, which can be especially important if toxic behavior is widespread or normalized within the team. An ally may be able to speak up on your behalf, lend credibility to your concerns, and offer support when you’re confronting challenging dynamics. This shared voice can help to bring attention to the toxicity and, ideally, drive change.

However, Fahller acknowledged that finding an ally may not always be enough. If a toxic environment persists despite attempts to address it, he advised a more decisive response: leaving. He argued that individuals should not allow themselves to be "ignored, threatened or made fun of," as staying in such an environment can be mentally and emotionally draining, ultimately leading to burnout and disengagement.

"If all else fails, go elsewhere. Don’t allow yourself to be ignored, threatened or made fun of."

This recommendation underscores Fahller's stance that no one should feel compelled to remain in an unchangeable toxic environment. He suggested that people value their self-respect and mental health over job stability if the work culture is irredeemably harmful.

Fahller’s advice reflected a pragmatic approach to toxicity: address it internally if possible, but recognize when to prioritize personal well-being over enduring a dysfunctional work environment. While leaving a job is often a difficult decision, Fahller's message was clear -- don’t compromise on respect and support. A healthy team environment where people feel safe and valued is essential not just for individual satisfaction but also for collective success.

2024-11-03 Nikhil Suresh - Skills that programmers need, to defend both their code and their careers - YouTube { www.youtube.com }

image-20241102200908952

In his talk, Nikhil Suresh, the director of Hermit Tech, explores the challenges that software engineers face in the corporate world. He begins with an old animal fable about a scorpion and a frog to illustrate the dynamics between programmers and businesses.

"The scorpion wants to ship a web application but cannot program, so it finds a frog because frogs are incredible programmers."

The scorpion assures the frog that it won't interfere with his work. However, after some time, the scorpion hires an agile consultant and imposes new restrictions, disrupting the frog's workflow. This story mirrors how businesses often unknowingly hinder their own developers.

Nikhil emphasizes that most companies don't know much about software, making it difficult for programmers to clearly indicate their value. He refers to Sturgeon's Law, which states that "90% of everything is bad," highlighting the prevalence of low standards in the industry.

He shares personal experiences where previous engineers lacked basic competence, such as not setting primary keys in databases or causing exorbitant costs due to misconfigured systems. These anecdotes illustrate that businesses cannot tell the difference between good and bad programmers, leading to competent developers being undervalued.

Introducing the concepts of profit centers and cost centers, Nikhil explains that IT departments are often seen as cost centers, affecting how programmers are treated within organizations. He points out that being better at programming isn't always highly valued by companies because they may not see a direct link between technical skill and profit.

To navigate these challenges, Nikhil advises developers to never call themselves programmers. He argues that the term doesn't convey meaningful information and can lead to misconceptions.

"If you tell someone who doesn't program that you're a programmer, their first thought is like, 'Ah, one of those expensive nerds.'"

He recommends reading Patrick McKenzie's article "Don't Call Yourself a Programmer, and Other Career Advice", which offers insights into presenting oneself more effectively in the professional sphere.

Nikhil encourages developers to write about their experiences and share them online. By doing so, they can showcase their unique ideas and differentiate themselves in the field. He believes that your unique ideas are what differentiate you from others and that sharing them helps in building a personal brand.

He also suggests that programmers should read outside of IT and delve into the humanities. This broadens their perspectives and provides valuable analogies for complex ideas. Nikhil shares how his involvement in improvised theater and reading "Impro: Improvisation and the Theatre" by Keith Johnstone helped him understand status dynamics in professional interactions.

Understanding these dynamics allows developers to navigate job interviews and workplace relationships more effectively. Nikhil emphasizes the importance of taking control of your career and making decisions that enhance your value to both yourself and society.

In conclusion, Nikhil urges developers to recognize that technical skill isn't the main barrier to having a better career. Factors like communication, strategic thinking, and understanding corporate dynamics play crucial roles. By focusing on these areas, developers can transform their passion into something that has greater value for both themselves and the broader community.

2024-11-02 Get old, go slow, write code! - Tobias Modig - NDC Oslo 2024 - YouTube { www.youtube.com }

image-20241116004657646

📝 Sustainable Software Development Careers: Aging, Quality, and Longevity in Tech

Introduction
In the fast-evolving world of software development, many professionals feel the pressure to stay young, move fast, and keep up with new trends. But does speed really equal success in this field? This post is for experienced developers, tech managers, and anyone considering a long-term career in software. We'll explore why sustainability in development—focusing on quality, experience, and career longevity—matters and how you can embrace aging as an asset, not a setback.

Why You Should Care
The tech industry often promotes rapid career progression and cutting-edge skills over stability and endurance. However, valuing experience, avoiding burnout, and emphasizing quality over speed are essential for creating durable, impactful software and ensuring personal career satisfaction.

Embracing Aging as a Developer

Many developers worry about becoming irrelevant as they age, yet experience can be a strength. Research shows the average age of developers is among the lowest across professional fields, meaning many leave the field early. However, experience contributes to problem-solving, architectural insights, and higher quality standards. Older developers often provide unique perspectives that younger professionals may lack, particularly in maintaining and improving code quality.

Slowing Down for Quality

Too many developers face intense pressure to deliver quickly, often sacrificing quality. This results in technical debt and rushed code that becomes difficult to maintain. The speaker argues that development is a marathon, not a sprint. Slowing down and building sustainable software creates long-term benefits, even if it appears slower at first. By prioritizing thoughtful coding and taking the time to address technical debt, developers can create resilient, maintainable systems.

Challenges with Traditional Career Progression

Many companies push experienced developers into management roles, which can leave skilled coders dissatisfied and underutilized. Known as the Peter Principle, this approach often results in skilled developers becoming ineffective managers. For those passionate about coding, staying in development roles—rather than climbing the corporate ladder—can offer fulfillment, especially if companies recognize and reward this choice.

Common Reasons Developers Leave the Field

Major reasons include burnout, shifting to roles with higher prestige, and losing the spark for coding. Additionally, aging can lead to insecurities about keeping up. To combat these trends, developers should prioritize work-life balance, take time to learn, and avoid the mindset that career progression has to mean management.

Practical Ways to Build a Sustainable Career

  • Commit to Continuous Learning: Attend conferences, read, and experiment with code to stay current.
  • Focus on Quality over Speed: Embrace practices like regular code reviews, refactoring, and retrospectives to build robust systems.
  • Build Team Trust and Psychological Safety: A supportive environment enhances productivity, allowing team members to grow together.
  • Incorporate Slack Time: Give yourself unstructured time to think, learn, and work creatively, helping avoid burnout and stagnation.

Let Experience Be Your Advantage

Staying relevant as a developer means focusing on the quality of your contributions, leveraging your experience to guide teams, and advocating for sustainable practices that benefit the entire organization. By valuing experience, resisting the rush, and maintaining passion, you can contribute meaningfully to tech at any age.

Quotes

"Getting old in software development is not a liability—it's an asset. Make those gray hairs your biggest advantage and let your experience shine through in quality code."

"Software development is not a sprint; it's a marathon. We need to slow down, find a sustainable pace, and stop rushing to deliver at the expense of quality."

"Don't let your career be dictated by the Peter Principle—just because you're a great developer doesn’t mean you’ll enjoy management. Stay with your passion if it’s coding."

"Poor quality code isn’t just a short-term fix; it’s a long-term burden. Building things right the first time is the fastest way to long-term success."

"There’s no need to be Usain Bolt in development; be more like a marathon runner. Set a steady, sustainable pace, focus on quality, and enjoy the journey."

2024-10-29 The Evolution of Functional Programming in C++ - Abel Sen - ACCU 2024 - YouTube { www.youtube.com }

image-20241116004141231

2024-11-04 Functional C++ - Gašper Ažman - C++Now 2024 - YouTube { www.youtube.com }

image-20241103203715424

This is the procedural version of the code, which is ugly and is to be modernized with functional programming.

// procedural example
#include <cstdlib>
#include <string>

auto is_hostname_in_args(int, char const* const*) -> bool;
auto get_hostname_from_args(int, char const* const*) -> char const*;

auto get_hostname(int argc, char const* const* argv, std::string default_hostname) -> std::string {
    // Split query / getter
    if (is_hostname_in_args(argc, argv)) {
        // Perhaps... might use optional here too?
        return get_hostname_from_args(argc, argv);
    }

    // Ad-hoc Maybe
    if (char const* maybe_host = getenv("SERVICE_HOSTNAME");
        maybe_host != nullptr && *maybe_host != '\0') {
        return maybe_host;
    }

    return default_hostname;
}

Unfortunately, I cannot reproduce the functional version from the talk, because I don't understand it.
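
That said, here is a minimal sketch of one possible functional-style rewrite (my own, not the version from the talk), assuming C++23's monadic std::optional operations (or_else and value_or):

// A sketch only: turn each ad-hoc "maybe" into an explicit std::optional and
// express the fallback chain as data flow instead of early returns.
#include <cstdlib>
#include <optional>
#include <string>

auto is_hostname_in_args(int, char const* const*) -> bool;
auto get_hostname_from_args(int, char const* const*) -> char const*;

auto hostname_from_args(int argc, char const* const* argv) -> std::optional<std::string> {
    if (is_hostname_in_args(argc, argv))
        return std::string{get_hostname_from_args(argc, argv)};
    return std::nullopt;
}

auto hostname_from_env() -> std::optional<std::string> {
    char const* maybe_host = std::getenv("SERVICE_HOSTNAME");
    if (maybe_host != nullptr && *maybe_host != '\0')
        return std::string{maybe_host};
    return std::nullopt;
}

auto get_hostname(int argc, char const* const* argv, std::string default_hostname) -> std::string {
    return hostname_from_args(argc, argv)
        .or_else(hostname_from_env)              // try the environment next
        .value_or(std::move(default_hostname));  // fall back to the default
}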

2024-11-07 Reintroduction to Generic Programming for C++ Engineers - Nick DeMarco - C++Now 2024 - YouTube { www.youtube.com }

image-20241107003250333

🔥🔥🔥2024-11-06 LEADERSHIP LAB: The Craft of Writing Effectively - YouTube { www.youtube.com }🔥🔥🔥

image-20241105224810433

found in 2024-11-06 Blog Writing for Developers { rmoff.net }

Introduction

Writing isn’t just about sharing information; it’s about making an impact. In this insightful lecture, a distinguished writing instructor from the University of Chicago's Writing Program emphasizes that effective writing requires understanding your audience, establishing relevance, and creating a compelling narrative. This article captures the speaker’s key advice on improving writing by focusing on purpose, value, and the reader's needs.


  1. Focus on Value, Not Originality
  • Advice: The speaker challenges the idea that writing must always present something "new" or "original." Instead, writers should prioritize creating valuable content that resonates with their audience.
  • Application: Rather than striving for originality alone, focus on producing content that addresses the reader’s concerns or questions. A piece of writing is valuable if it enriches the reader’s understanding or helps solve a problem they care about.
  2. Define the Problem Clearly
  • Advice: To make a piece of writing compelling, start by establishing a problem that is relevant to your audience. A well-defined problem creates a sense of instability or inconsistency, which engages readers and positions the writer as a problem-solver.
  • Application: Use contrasting language to highlight instability—words like "but," "however," and "although" signal unresolved issues. This approach shifts the reader’s focus to the problem at hand, making them more receptive to the writer's proposed solution.
  3. Understand and Address Your Reader’s Needs
  • Advice: A writer’s task is to understand the specific needs and concerns of their reading community. This involves identifying problems that resonate with them and framing your thesis or solution in a way that is relevant to their lives or work.
  • Application: In academic and professional settings, locate problems in real-world contexts. Rather than presenting background information, articulate a challenge or inconsistency that is specific to the reader’s field or interests, making your argument compelling and directly relevant.
  4. Use the Language of Costs and Benefits
  • Advice: Writers should make it clear how the identified problem affects the reader directly. Frame issues in terms of "costs" and "benefits" to emphasize why addressing the problem is essential.
  • Application: Highlight the impact of ignoring the problem versus the benefits of solving it. This approach reinforces the relevance of your writing by aligning it with the reader’s motivations and concerns.
  5. Beware of the "Gap" Approach
  • Advice: Avoid using the concept of a "knowledge gap" as the sole justification for writing on a topic. While identifying gaps in research can work, it often lacks the urgency or impact required to engage readers fully.
  • Application: Rather than just pointing out missing information, emphasize the practical implications of filling that gap. Explain how the lack of certain knowledge creates instability or inconsistency in the field, making the need for your insights more compelling.
  6. Adopt a Community-Centric Perspective
  • Advice: Tailor your writing to the specific communities who will read it. Different communities (e.g., narrative historians vs. sociologists) have distinct approaches to problems and value different types of arguments.
  • Application: Define and understand the community of readers your work is meant to serve. Address their concerns directly and frame your argument in terms that align with their unique perspectives and values.
  7. Learn from Published Articles
  • Advice: Published work often contains subtle rhetorical cues about what resonates with readers in a specific field. Study these articles to understand the language, structure, and approach that successful writers use.
  • Application: Identify patterns in the language of published work within your target field. For instance, if a journal commonly uses cost-benefit language, incorporate it into your writing to align with reader expectations.
  8. Emphasize Function Over Form
  • Advice: Writing should serve a clear function beyond just following formal rules. Effective writing achieves its purpose by clearly communicating the problem and its significance to readers.
  • Application: Instead of focusing solely on rules or formalities, think about what your writing needs to accomplish for your audience. Make sure that every section and statement reinforces your overall argument and purpose.

2024-11-08 Developer Joy – How great teams get s%*t done - Sven Peters - NDC Oslo 2024 - YouTube { www.youtube.com }

image-20241107182436885

image-20241107182600363

In today’s fast-evolving tech landscape, “Developer Joy” is emerging as a crucial focus for engineering teams striving to deliver high-quality, innovative software. For those in software engineering or tech management, this concept brings a fresh perspective, shifting away from traditional productivity metrics and emphasizing a developer’s experience, satisfaction, and creativity. By focusing on Developer Joy, teams can foster an environment where developers not only perform optimally but also find deep satisfaction in their craft. This shift is more than just a trend; it’s a rethinking of how we define and sustain productivity in a complex, creative field like software development.

The Problem with Traditional Productivity Metrics

Traditional productivity measures, like lines of code or tasks completed, often fail to capture a developer's real impact. Software development, unlike factory work, requires creativity, problem-solving, and adaptability—traits that are poorly reflected in industrial-era metrics. Instead of simply measuring output, focusing on Developer Joy acknowledges the unique, non-linear nature of coding and innovation.

Developer Joy: A New Approach to Productivity

Developer Joy isn't about doing more in less time; it’s about creating an environment where developers thrive. When developers are joyful, they produce better code, collaborate more effectively, and sustain their motivation over time. Atlassian’s approach to Developer Joy incorporates several elements to support this environment:

  • High-quality Code: Developers enjoy working with well-structured, maintainable code.

  • Progressive Workflows: Fast, friction-free pipelines allow developers to take an idea from concept to deployment quickly.

  • Customer Impact: When developers know they’re making a meaningful difference for users, they feel a greater sense of pride and accomplishment.

Tools and Processes to Foster Developer Joy

To enable Developer Joy, teams at Atlassian have implemented practical solutions:

  • Constructive Code Reviews: By establishing a code review culture where feedback is respectful and constructive, teams can maintain high standards without discouraging or frustrating developers. Guidelines like assuming competence, offering clear reasoning, and avoiding dismissive comments make reviews both productive and uplifting.

  • Flaky Test Detection: The Confluence team developed an internal tool that identifies “flaky tests” (tests that fail intermittently) to save developers from unnecessary debugging. This tool boosts productivity by automating the detection and removal of unreliable tests.

  • The Punit Bot for Review Notifications: Timely code reviews are essential for maintaining team flow. The Punit Bot automatically notifies team members when their input is needed on pull requests, cutting down waiting times and keeping development on track.

Cross-Functional, Autonomous Teams

Teams need the freedom to work independently while staying aligned on goals. By embedding key functions within each team (like design, QA, and operations), Atlassian ensures that teams can progress without external dependencies. This “stack interchange” model allows each team to flow without bottlenecks.

Quality Assistance over Quality Assurance

Developers at Atlassian don’t rely solely on QA engineers to validate code. Instead, they partner with QA in the planning stage, gaining insights on testing best practices and writing their own test cases. This approach, called “Quality Assistance,” keeps quality embedded throughout the process and gives developers more control over the software they release.

Collaborating with Product Teams

Effective collaboration with product teams is crucial. Atlassian integrates developers into the full product lifecycle—from understanding the problem to assessing impact after release. This holistic involvement reduces miscommunication, enables rapid adjustments based on early feedback, and fosters a sense of ownership and pride in the end product.

The Developer Joy Survey: Measuring What Matters

To ensure Developer Joy remains high, Atlassian conducts regular “Developer Joy Surveys,” asking developers about their satisfaction in areas such as tool access, wait times, autonomy, and overall work satisfaction. By measuring both satisfaction and importance, teams identify and address specific challenges to ensure joy remains a central part of their development culture.


Notable Quotes and Jokes

  • “Developer Joy is about creating an environment where developers thrive, not just survive.”
  • “If you can’t measure Developer Joy, you’re probably measuring the wrong thing.”
  • “Code reviews should be about learning, not earning jerk points.”
  • “Productivity isn’t about lines of code; it’s about finding joy in the code you write.”

2024-11-09 Herding cats: lessons from 15 years of managing engineers at Microsoft - Kevin Pilch - YouTube { www.youtube.com }

image-20241109130107457

image-20241109130841936

Introduction

Purpose and Relevance
This talk explores the nuances of managing software engineering teams. It’s particularly relevant for new or seasoned managers, especially those transitioning from technical roles to leadership. The speaker, Kevin Pilch, leverages his extensive experience managing engineering teams at Microsoft to provide insights into effective management strategies, challenges, and actionable advice.

Target Audience
Ideal for current and aspiring managers of software engineering teams, as well as individual contributors considering a management path.

Main Content

Coaching vs. Teaching
The emphasis here is on coaching engineers rather than simply teaching them. Coaching means asking questions that encourage team members to find solutions independently, fostering growth and engagement. By using the "ask solution" quadrant approach, managers can guide engineers toward problem-solving rather than directly offering answers, which enhances ownership and accountability.

Focus on Top Performers
Spend more time supporting top performers instead of focusing solely on underperformers. The impact of losing a high performer is significant—they are often highly sought after and can easily find other opportunities. Retaining skilled contributors by offering continuous support and new challenges is essential.

Importance of Self-Evaluation
The self-evaluation process is a valuable opportunity for engineers to reflect on their career paths, skill gaps, and accomplishments. By encouraging engineers to take ownership of self-assessments, managers promote introspection and personal growth, while also creating useful documentation for future managers and potential promotions.

Providing Clear Feedback
When giving performance feedback, it’s essential to avoid “weasel words” and sugarcoating, which soften the message and create misunderstandings. Use specific language that correlates to performance expectations—such as “lower than expected impact”—to ensure feedback is clear, actionable, and direct.

Encouraging Constructive Failure
Allow team members to experience failure on controlled projects to enhance learning and resilience. This approach lets engineers learn from mistakes without jeopardizing critical objectives. By creating “safe-to-fail” environments, managers can frame certain projects as experiments and define success metrics upfront, avoiding sunk cost fallacies and confirmation biases.

Task Assignment Using the ABC Framework
Assign tasks based on complexity relative to each team member’s skill level. Above-level tasks serve as stretch assignments to promote growth, current-level tasks reinforce skills, and below-level tasks include routine but necessary responsibilities that everyone shares. Balancing these types keeps team members challenged and engaged while ensuring essential work is completed.

Motivating Different Personality Types
The SCARF model—Status, Certainty, Autonomy, Relatedness, Fairness—can help recognize diverse motivators across the team. Managers should tailor interactions to each team member’s unique motivators, fostering a supportive environment that avoids triggering negative responses.

2024-11-12 Success On Your Own Terms - Todd Gardner - CPH DevFest 2024 - YouTube { www.youtube.com }

image-20241111212449932

Defining Success on My Own Terms: Lessons from My Journey in Tech

For over 25 years, I've navigated the ever-changing landscape of the tech industry. This journey has been filled with successes, failures, and invaluable lessons that have shaped not only my career but also my understanding of what success truly means. If you're a developer, entrepreneur, or someone contemplating your own path in tech, perhaps my experiences can offer some insights.

The Evolution of Success

My definition of success has shifted throughout my career. It began with a desire for prestige, evolved into a quest for independence, and later transformed into valuing time above all else. I've come to realize that success isn't a fixed destination but a moving target that changes as we grow.

"The definition of success for me has shifted throughout my career. It used to just mean prestige. Then it meant independence, and then it meant time, and it's probably going to change again."

Building Request Metrics

I founded Request Metrics with the goal of addressing a critical problem: web performance. Initially, we focused on client-side observability, aiming to help developers monitor their websites and applications. However, we soon discovered that web performance is a complex issue, laden with constantly changing metrics and definitions.

The Challenge of Web Performance

Developers often struggle with understanding and improving web performance. The industry's metrics seem to continually shift, making it hard to pin down what "fast" truly means. This confusion was costing businesses real money, especially as user expectations for speed grew.

"It turns out developers don't know how to make things fast, and it's a problem that got a lot more important recently because of a thing Google did called the Core Web Vitals."

Google's Core Web Vitals

The game changed when Google introduced Core Web Vitals—a set of metrics that directly impact search rankings. Suddenly, web performance wasn't just a technical concern but a business-critical issue. Companies that relied on SEO for visibility faced tangible consequences if their websites didn't meet these new standards.

"Google said, 'This is how fast you need to be,' and if you don't, you're going to lose page rank. So now this suddenly got way more... now there is a cost to do this. If you are an e-commerce store or you are a content publisher... you care a whole lot about the Core Web Vitals; you care about performance."

Pivoting to Solve Real Problems

Recognizing this shift, we pivoted Request Metrics to focus on helping businesses understand and improve their Core Web Vitals. We developed tools that provide clear, actionable insights into performance issues. By doing so, we addressed a real pain point, offering solutions that companies were willing to invest in to protect their search rankings and user experience.

"We started building a new thing that was all about the Core Web Vitals. It was like, 'This is the problem that we need to solve.' Businesses that depend on their SEO... it's not clear when they're about to lose their SEO ranking because of performance issues. So let's focus on that."

Lessons Learned

Throughout this journey, I've learned several key lessons.

Time is precious. Life is unpredictable, and opportunities can be fleeting. It's crucial to focus on what truly matters and act promptly.

"First, you don't have as much time as you think. This story can end for any one of us tomorrow... It might all be over tomorrow, so do what you think is important."

Embracing uncertainty is essential. Feeling unprepared is natural. Many successful endeavors begin without a clear roadmap. Confidence often comes from taking action and learning along the way.

"Don't worry if you don't know, if you don't feel confident in what you're doing. None of us know what we're doing when we start... They just started and figured it out as they went. You can do that too."

Building relationships is vital. Success isn't achieved in isolation. Cultivating strong relationships and working collaboratively can open doors you never knew existed.

"Remember, no matter what you do or what you want out of life, you need to build relationships with people around you. Don't isolate yourself and think you can solve it all by yourself. Those relationships... are going to pay huge dividends that you could never imagine."

Solving real problems should be a priority. Focus on creating solutions that address genuine needs. If your product solves a real problem, people are more likely to value and pay for it.

"Be sure to build products that actually solve real problems that cost people money. Otherwise, you might find yourself building something really cool that nobody is ever going to pay you for."

Adapting and evolving are necessary. Be prepared to change course. Flexibility is key to staying relevant and achieving long-term fulfillment.

"We found through this we found a problem that was costing money to real people, and this is the path that we're on right now... because now we're solving a problem for people that... it's cheaper to pay us to solve the problem than to deal with the risks."

Taking risks and shipping early can lead to growth. Don't wait for perfection. Launching early allows you to gather feedback and iterate, which is more valuable than holding back out of fear.

"If you're going to build something successful and durable... you're going to need people to help. And be sure to build products that actually solve real problems... But you won't hit them unless you ship something, and if you're not embarrassed of it, you're waiting too long. Just throw something together and get it out there and see if anybody cares."

Moving Forward

As I continue on this path, I understand that my definition of success will keep evolving. What's important is to remain true to oneself, prioritize meaningful work, and leverage relationships to create lasting impact.

2024-11-14 Windows: Under the Covers - From Hello World to Kernel Mode by a Windows Developer - YouTube { www.youtube.com }

image-20241116005812334

For programmers and tech enthusiasts, "Hello World" is a rite of passage, a first step in coding. But behind the simplicity of printing "Hello World" on the screen, there lies a deeply intricate process within the Windows operating system. This article uncovers the fascinating journey that a simple printf command in C takes, from the initial code execution to the text’s appearance on the screen, traversing multiple layers of software and hardware. If you're curious about what happens behind the scenes of an OS or want a glimpse into the hidden magic of programming, this guide is for you.

  1. Starting Point: Writing Hello World in C

    • The classic C code printf("Hello, World!"); initiates the journey. In this line, the printf function doesn't directly display text. Instead, it prepares data for output, setting off a series of calls to the OS to manage the display of the text.
  2. Processing printf: User Mode to Kernel Mode

    • The runtime library processes printf, identifying format specifiers and preparing raw text to be sent to the output. This initiates a function call, like WriteFile or WriteConsole, which interacts with Windows’ Win32 API—a vast interface linking programs to system resources; a minimal sketch of this call appears after the list below.
    • Kernel32.dll: Despite its name, Kernel32.dll operates in user mode, providing system access without directly tapping into the kernel. Named for historical reasons, it bridges functions requiring OS kernel resources by keeping security intact.
  3. Transitioning with System Calls

    • System calls serve as gates from user mode (where applications operate) to kernel mode (where core OS processes run). Here, Windows uses the System Service Descriptor Table and a system-call instruction (historically int 2E, today typically syscall or sysenter) to cross into kernel mode securely, ensuring only validated requests reach system resources.
  4. Windows Kernel Processing with ntoskrnl.exe

    • After the system call, ntoskrnl.exe checks permissions and validates parameters to ensure secure execution. This step guarantees the program isn’t making unauthorized access attempts, which fortifies Windows against possible exploits.
  5. Console Management through csrss.exe

    • The Client Server Runtime Subsystem (csrss.exe) manages console windows in user mode. csrss updates the display buffer, which holds the text data ready for rendering. It keeps a two-dimensional array of characters, handling all aspects like color, intensity, and style to maintain the console window’s appearance.
  6. Rendering Text with Graphics Device Interface (GDI)

    • GDI takes over for text rendering within the console, providing essential drawing properties like font and color. The console then relies on the Windows Display Driver Model (WDDM), which bridges communication between software and the graphics hardware.
  7. The GPU and Frame Buffer

    • The GPU receives the data, rendering the text by processing pixel-by-pixel instructions into the frame buffer. This buffer, a region of memory storing display data, holds the image of "Hello World" that will appear on screen. The GPU then sends this image to the display via HDMI or another interface.
  8. From Monitor to Visual Cortex

    • The display presents the text through LED pixels, and from there, light travels to the viewer’s eyes. Visual processing occurs in the brain's visual cortex, ultimately registering "Hello World" in the viewer's consciousness—a culmination of hardware, software, and human biology.
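
To make the user-mode half of that journey concrete, here is a minimal C++ sketch of the call mentioned in step 2: printf formats the text in the C runtime, and GetStdHandle/WriteConsoleA are the documented Win32 calls exposed by Kernel32.dll that sit just before the kernel-mode transition. The exact path the runtime takes varies by CRT version, so treat this as an illustration rather than the talk's own code.

```cpp
// Minimal sketch: the user-mode side of "Hello World" on Windows.
#include <windows.h>
#include <cstdio>

int main() {
    // What the C runtime does for us behind the scenes:
    printf("Hello, World!\n");

    // Roughly the Win32 call the runtime ends up making on our behalf:
    const char msg[] = "Hello, World! (via WriteConsoleA)\n";
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);   // console output handle
    DWORD written = 0;
    WriteConsoleA(out, msg, static_cast<DWORD>(sizeof(msg) - 1), &written, nullptr);
    return 0;
}
```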

Notable Quotes and Jokes from Dave Plummer:

  • "Imagine the simplest Windows program you could write...but do you know how the magic happens?"
  • "Our journey begins in userland within the heart of your C runtime library."
  • "Calling printf is like sending a messenger on a long cross-country journey from high-level code to low-level bits and back again."
  • "When 'Hello World' pops up on the screen, you’re witnessing the endpoint of a complex, coordinated process..."

2024-11-14 In Prompts We Trust - Jiaranai Keatnuxsuo - CPH DevFest 2024 - YouTube { www.youtube.com }

image-20241116010954387

For those diving into AI applications, especially prompt engineering with generative AI, understanding trust-building and prompt precision is key to leveraging AI effectively. If you’re an AI practitioner, developer, or someone interested in optimizing how language models generate outputs, this guide explores techniques to achieve trustworthy and accurate AI responses. By improving prompt engineering skills, you’ll better navigate the complexities of AI interactions and make your AI applications more reliable, relevant, and valuable.

Core Techniques and Strategies in Prompt Engineering

When working with generative AI, the goal is to create prompts that elicit useful, accurate, and relevant responses. This requires understanding both the technical aspects of prompt engineering and the psychological aspects of trust. Here are key techniques for mastering this process:

The Importance of Trust in AI Outputs

Trust plays a central role in whether users accept or reject AI-generated outputs. As the speaker noted, “Trust is the bridge between the known and the unknown.” For AI to be effective, especially in high-stakes fields like medicine or government applications, users must feel confident in the system’s reliability and fairness. Factors that foster this trust include:

  • Accuracy: Ensuring the output is based on factual information and up-to-date sources.
  • Reliability: Confirming that outputs remain consistent across different scenarios.
  • Personalization: Tailoring responses to individual needs and contexts.
  • Ethics: Adhering to ethical guidelines, avoiding bias, and maintaining cultural sensitivity.

Precision in Prompt Engineering: Essential Techniques

To build trust, prompts need to be structured in a way that maximizes clarity and minimizes ambiguity. Key methods include:

  • Role Prompting: Assigning specific roles, such as “act as a coding assistant,” guides the model in responding within a particular expertise framework. As the speaker shared, “Role prompting is really good in terms of getting it to go find all those billions of web pages it was trained on.”

  • Chain of Thought Prompting: By instructing the model to provide step-by-step reasoning, this method helps in breaking down complex queries and reducing errors. For example, prompting the model to explain each step in a calculation avoids “error piling,” where initial mistakes skew subsequent responses.

  • System Messages: Used primarily by developers, system messages define overarching rules or tones for the AI. These instructions are hidden from the end-user but ensure the model stays consistent, ethical, and aligned with specific guidelines.

Handling AI’s Limitations: Mitigating Hallucinations and Bias

“Hallucination” refers to instances where AI generates plausible-sounding but incorrect information. The speaker explained, “We all think that hallucination is a bug; it’s actually not a bug—it’s a feature, depending on what you’re trying to do.” For applications where accuracy is crucial, employing techniques like Retrieval-Augmented Generation (RAG) helps ground AI responses by referencing reliable external sources.

Optimizing Prompt Parameters for Desired Outputs

Adjusting parameters such as temperature, frequency penalties, and presence penalties can enhance the creativity or precision of AI responses. For example, higher temperatures lead to more creative, varied outputs, while lower settings make responses more predictable and factual. As the speaker noted, “Every word in a prompt matters,” so these settings allow for fine-tuning responses to suit specific needs.

Recap & Call to Action

Effective prompt engineering isn’t just about crafting prompts—it’s about understanding trust and precision. Key strategies include role prompting, step-by-step guidance, and adjusting AI parameters to manage reliability and relevance. Remember, the goal is to enhance user trust by ensuring outputs are clear, relevant, and ethically sound. Try implementing these techniques in your next AI project to see how they impact the quality and trustworthiness of your results.

2024-11-14 Gwern Branwen - How an Anonymous Researcher Predicted AI's Trajectory { www.dwarkeshpatel.com }

image-20241114141855768

Gwern is a pseudonymous researcher and writer. He was one of the first people to see LLM scaling coming. If you've read his blog, you know he's one of the most interesting polymathic thinkers alive.

In order to protect Gwern's anonymity, I proposed interviewing him in person, and having my friend Chris Painter voice over his words after. This amused him enough that he agreed.

2024-11-16 Modern & secure adaptive streaming on the Web - Katarzyna Dusza - CPH DevFest 2024 - YouTube { www.youtube.com }

image-20241115230122104

Introduction

In today’s streaming-centric world, the demand for smooth, high-quality, and secure content playback has never been higher. Whether it’s movies, music, or live broadcasts, users expect seamless experiences across multiple devices and network conditions. For developers and media engineers, understanding adaptive streaming and secure content delivery on the web is critical to meet these demands. This guide dives into adaptive streaming, DRM encryption, and decryption processes, providing the essential tools and concepts to ensure secure, efficient media delivery.

Who This Guide Is For

This guide is intended for software engineers, streaming platform developers, and media engineers focused on optimizing web streaming quality and security. Those interested in learning about adaptive bitrate streaming, DRM protocols, and encryption processes will find valuable insights and practical applications.

2024-11-16 Back to Basics: Unit Testing in C++ - Dave Steffen - CppCon 2024 - YouTube { www.youtube.com }

image-20241116002211334

Introduction

In modern software development, unit testing has become a foundational practice, ensuring that individual components of code—specifically functions—perform as expected. For C++ developers, unit testing offers a rigorous approach to quality control, catching bugs early and enhancing code reliability. This article covers the essentials of unit testing in C++, focusing on why and how to apply it effectively in your projects. Whether you’re an experienced developer or a newcomer in C++, this guide will clarify best practices and introduce powerful frameworks to streamline your testing efforts.

Core Concepts and Challenges in Unit Testing

Understanding Unit Testing in C++
Unit testing verifies the smallest unit of code, usually a function, to confirm it works as intended. Over the past decade, it has become essential for software development projects, preventing critical bugs from reaching production and reducing the risk of project failures. While the concept is straightforward, implementing effective unit tests in C++ brings unique challenges, such as determining what to test and choosing the right framework to manage tests efficiently.

Addressing Key Challenges

  1. Framework Selection: C++ offers various testing frameworks like Catch2, which simplifies setting up unit tests and provides structured error reporting.
  2. Consistent Definitions: Defining what qualifies as a unit test varies across the industry. This inconsistency can complicate efforts to standardize testing practices.
  3. Testing Complexity: Many projects require extensive, comprehensive testing to cover complex logic, edge cases, and integration points without compromising performance.

Implementing Unit Tests Effectively

Using a Framework
Frameworks like Catch2 streamline test organization, allowing developers to structure tests in isolated, repeatable units. They provide clear output and automated reporting, and they exercise every test case, highlighting each failure without halting the entire test run. The framework choice is critical in ensuring that tests are not only functional but also maintainable and understandable.
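
As an illustration of what such a framework looks like in practice, here is a minimal Catch2 sketch (assuming Catch2 v3 linked against its Catch2WithMain target, which supplies main()); the add function is a hypothetical stand-in for the unit under test, not something from the talk.

```cpp
// A minimal Catch2 (v3) unit test; build by linking against Catch2WithMain,
// which provides main() and the test runner.
#include <catch2/catch_test_macros.hpp>

// Hypothetical function under test -- stands in for "the smallest unit of code".
int add(int a, int b) { return a + b; }

TEST_CASE("add combines two integers", "[math]") {
    // Each SECTION re-runs the surrounding setup, keeping cases isolated and repeatable.
    SECTION("positive operands") {
        REQUIRE(add(2, 3) == 5);
    }
    SECTION("a negative operand") {
        REQUIRE(add(2, -3) == -1);
    }
    // CHECK reports a failure but lets the remaining assertions keep running.
    CHECK(add(0, 0) == 0);
}
```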

Structure and Placement of Tests
The closer tests are to the code they evaluate, the easier they are to maintain. Best practices recommend keeping test files within the same project structure, allowing for easy updates and reducing the chance of disconnects between tests and the code they assess.

Scientific Principles in Unit Testing

Effective unit testing is analogous to scientific experimentation. Each test is an “experiment” designed to verify code behavior by testing specific inputs and expected outcomes. Emphasizing falsifiability ensures that tests are objective and replicable, providing clear indications of any issues. Core scientific principles in testing include:

  1. Repeatability and Replicability: Tests should yield consistent results on repeated runs.
  2. Precision and Accuracy: Tests should be specific and unambiguous, with clear indications of success or failure.
  3. Thorough Coverage: Effective tests cover all code paths and edge cases, ensuring all possible scenarios are addressed.

Valid and Invalid Tests: Ensuring Accuracy

Accurate tests provide clear insights into code functionality. Avoid using the code’s output as its own test standard—known as circular logic—because it cannot reliably reveal bugs. Instead, source test expectations from reliable, external standards or reference calculations to ensure validity and rigor.

White Box vs. Black Box Testing Approaches

Two approaches define C++ unit testing:

  • White Box Testing: Tests directly access private code areas using workarounds like friend classes, allowing tests to examine internal states. However, this method ties tests closely to code structure, making future refactoring more challenging.
  • Black Box Testing: Tests only interact with public interfaces, testing expected behaviors from an end-user perspective. Black Box Testing is recommended for maintainability, as it allows refactoring without breaking tests by focusing on behavior rather than code internals.
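
The difference is easiest to see side by side. The sketch below uses a hypothetical Stack class and Catch2: the white-box test reaches private state through a friend declaration, while the black-box test goes only through the public interface, so renaming the internal container would break the first test but not the second.

```cpp
// Contrasting white-box and black-box tests on a hypothetical Stack class.
#include <catch2/catch_test_macros.hpp>
#include <cstddef>
#include <vector>

class Stack {
public:
    void push(int v) { data_.push_back(v); }
    int pop() { int v = data_.back(); data_.pop_back(); return v; }
    bool empty() const { return data_.empty(); }
private:
    std::vector<int> data_;
    friend struct StackWhiteBoxTest;   // white-box hook: the test can see internals
};

// White-box: peeks at private state, so it is tied to the current implementation.
struct StackWhiteBoxTest {
    static std::size_t rawSize(const Stack& s) { return s.data_.size(); }
};

TEST_CASE("white-box: push grows the internal container", "[stack][whitebox]") {
    Stack s;
    s.push(42);
    REQUIRE(StackWhiteBoxTest::rawSize(s) == 1);
}

// Black-box: only the public interface, so the internals are free to change.
TEST_CASE("black-box: a pushed value comes back out", "[stack][blackbox]") {
    Stack s;
    s.push(42);
    REQUIRE(s.pop() == 42);
    REQUIRE(s.empty());
}
```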

Behavior-Driven Development (BDD) and Documentation

BDD guides developers to create tests focused on expected behaviors, providing intuitive documentation. Each test names and validates a specific behavior, such as "a new cup is empty," which makes understanding the code straightforward for future developers.
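
Catch2 also ships BDD-style macros (SCENARIO, GIVEN, WHEN, THEN) that map directly onto this idea. Below is a sketch of the "a new cup is empty" behavior; the Cup class is hypothetical, written only to keep the example self-contained.

```cpp
// BDD-style Catch2 test for the "a new cup is empty" behavior.
#include <catch2/catch_test_macros.hpp>

class Cup {
public:
    bool empty() const { return amount_ == 0; }
    void fill(int ml) { amount_ += ml; }
private:
    int amount_ = 0;
};

SCENARIO("a cup holds liquid", "[cup]") {
    GIVEN("a new cup") {
        Cup cup;
        THEN("it is empty") {
            REQUIRE(cup.empty());
        }
        WHEN("it is filled") {
            cup.fill(250);
            THEN("it is no longer empty") {
                REQUIRE_FALSE(cup.empty());
            }
        }
    }
}
```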

Designing Readable and Maintainable Tests

Readable and maintainable tests are simple and free of unnecessary complexity. Every unit test should focus on a single behavior, making tests easy to interpret and troubleshoot. This clarity is essential for enabling reviewers to understand test intentions without knowing the code intimately.

Test-Driven Development (TDD) and Its Role in Design

TDD reinforces software design by encouraging developers to write tests before code. Known as the Red-Green-Refactor cycle, TDD begins with writing a failing test (Red), creating code to make the test pass (Green), and refining the code (Refactor). This practice minimizes bugs from the outset, refines design, and builds a stable foundation of tests to verify code during refactoring.

· 36 min read

⌚ Nice watch!

In this blog post, I'll be sharing a collection of videos with concise content digests. These summaries extract the key points, focusing on the problem discussed, its root cause, and the solution or advice offered. I find this approach helpful because it allows me to retain the core information long after watching the video. This section will serve as a dedicated space for these "good watches," presenting only the most valuable videos and their takeaways in one place.

2024-08-18 Burnout - When does work start feeling pointless? | DW Documentary - YouTube { www.youtube.com }

image-20240817174213143

High-Level Categories and Subcategories of Problems in the Transcript

1. Workplace Dysfunction

1.1 Bureaucracy and Sabotage

  • Problem: Office life has adopted tactics of sabotage (00:01:13) similar to a WWII manual, where inefficiency is encouraged through endless meetings, paperless offices, and waiting for decisions in larger meetings.

  • Root Cause: Bureaucratic processes have unintentionally adopted methods once used deliberately to disrupt efficiency.

  • Solution: Recognize the signs of sabotage in office routines and seek to streamline decision-making and reduce unnecessary meetings.

1.2 Administrative Bloat

  • Problem: Administrative jobs (00:03:28) have increased from 25% to 75% of the workforce. These include unnecessary supervisory, managerial, and clerical jobs.

  • Root Cause: Expansion of administrative roles rather than reducing workload with technology.

  • Solution: A shift towards more meaningful roles and reducing bureaucratic excess would help in streamlining operations.

2. Employee Burnout and Mental Health

2.1 Physical and Emotional Exhaustion

  • Problem: Burnout (00:10:11) manifests in intense physical exhaustion, to the point of difficulty performing basic tasks, and emotional breakdowns.

  • Root Cause: Overwork, perfectionism, and the pressure to perform.

  • Solution: Recognize the early signs of burnout, reduce workloads, and address stress proactively through support and time off.

2.2 Pluralistic Ignorance

  • Problem: Employees feel isolated, believing they are the only ones struggling (00:15:19), while everyone else seems fine.

  • Root Cause: Lack of open communication about stress and burnout in the workplace.

  • Solution: Encourage honest discussions about workplace difficulties to reduce isolation and collective burnout.

3. Managerial and Leadership Failures

3.1 Misaligned Management Expectations

  • Problem: Many managers are promoted based on tenure or individual performance (00:24:26), rather than leadership skills, leading to poor team management.

  • Root Cause: Promotions based on irrelevant criteria, such as tenure, rather than leadership capability.

  • Solution: Companies need to create pathways for individual contributors to be rewarded without forcing them into management roles.

3.2 Disconnect Between Managers and Employees

  • Problem: Managers often do not engage with employees on a personal level (00:26:32), leading to isolation and poor job satisfaction.

  • Root Cause: Lack of training for managers to build relationships with their teams.

  • Solution: Managers should be trained in emotional intelligence and encouraged to have personal conversations with employees.

4. Corporate Culture and Value Conflicts

4.1 Corporate Reorganizations

  • Problem: Reorganizations, layoffs, and restructuring cause ongoing stress for employees (00:34:28). People live in fear of losing their jobs despite hard work.

  • Root Cause: Frequent corporate restructuring often lacks a clear purpose beyond satisfying financial analysts or stockholders.

  • Solution: Limit reorganizations to only when necessary and focus on transparent communication to reduce employee anxiety.

4.2 Cynicism Due to Unfair Treatment

  • Problem: When workplaces are seen as unfair (00:46:43), cynicism grows, leading to a toxic environment.

  • Root Cause: Lack of transparency and fairness in company policies and actions, leading to distrust.

  • Solution: Implement fair policies and involve employees in decision-making to reduce feelings of exploitation.

5. Misalignment of Work and Purpose

5.1 Lack of Value in Work

  • Problem: Employees feel their work lacks social value (00:33:00). Despite hard work, they see no real-world impact or meaning.
  • Root Cause: The economic system rewards meaningless work more than jobs that provide immediate, tangible benefits to society.
  • Solution: Employers should align tasks with broader human values and ensure that workers understand the social impact of their contributions.

Summary of Key Problems and Solutions

  1. Workplace Dysfunction: Bureaucratic inefficiency, administrative bloat, and unnecessary meetings create a sense of sabotage in modern offices. Solution: Streamline decision-making and reduce bureaucratic roles.
  2. Employee Burnout: Burnout is widespread due to overwork, isolation, and emotional stress. Solution: Acknowledge the signs of burnout, reduce workload, and foster open communication.
  3. Managerial Failures: Many managers lack the skills to lead effectively, causing disengagement and poor team dynamics. Solution: Train managers in leadership and emotional intelligence.
  4. Corporate Culture: Frequent reorganizations and unfair treatment create cynicism and stress among employees. Solution: Ensure fair policies and minimize unnecessary restructurings.
  5. Lack of Meaningful Work: Employees feel disconnected from the social value of their work, seeing it as pointless. Solution: Align work tasks with human values and meaningful contributions.

The most critical issues are employee burnout and the disconnect between management and workers, both of which contribute to widespread dissatisfaction and inefficiency in workplaces. Addressing these through better leadership training, reducing unnecessary work, and improving workplace communication can lead to healthier, more engaged employees.

2024-10-13 How to Spend 14 Days in JAPAN 🇯🇵 Ultimate Travel Itinerary - YouTube { www.youtube.com }

image-20241013110107937

Here’s a streamlined travel plan for visiting some of Japan’s most iconic destinations, focusing on the essential experiences in each place. Follow this itinerary for a mix of history, nature, and food.

1. Shirakawago
Start your journey in Shirakawago, a mountain village known for its traditional Gassho-zukuri farmhouses and heavy winter snowfall. The buildings are arranged facing north to south to minimize wind resistance. Stay overnight in one of the farmhouses to fully experience the town.

  • Don't miss: The House of Pudding, serving Japan’s best custard pudding (2023 winner).

2. Takayama
Head to Takayama, a town in the Central Japan Alps, filled with traditional architecture and a retro vibe. Walk through the Old Town, and visit the Takayama Showa Museum, which perfectly captures Japan in the 1950s and 60s.

  • Must-try food: Hida Wagyu beef is a local specialty, available in street food stalls or restaurants. You can enjoy a stick of wagyu for around 600 yen.

3. Kyoto
Next, visit the cultural capital, Kyoto, and stay in a Machiya townhouse in the Higashiyama district for an authentic experience. Kyoto offers endless shrines and temples to explore.

  • Fushimi Inari Shrine: Famous for its 10,000 red Torii gates leading up Mount Inari. The gates are donated by businesses for good fortune.
  • Kinkakuji (Golden Pavilion): One of Kyoto’s most iconic landmarks, glistening in the sunlight.
  • Tenryuji Temple: A 14th-century Zen temple with a garden and pond, virtually unchanged for 700 years.

4. Nara
Travel to Nara, a smaller city where you can explore the famous Nara Park, home to 1,200 friendly deer. You can bow to the deer, and they'll bow back if they see you have crackers.

  • Todaiji Temple: Visit the 49-foot-tall Buddha and try squeezing through the pillar’s hole (said to grant enlightenment).
  • Yomogi Mochi: Don’t miss this chewy rice cake treat filled with red bean paste, but eat it carefully!

5. Osaka
End your trip in Osaka, known as the nation’s kitchen. Stay near Dotonbori to experience the neon lights and vibrant nightlife.

  • Takoyaki: Grab some fried octopus balls, Osaka’s most famous street food, but be careful—they’re hot!
  • Osaka Castle: Explore this iconic castle, though the interior is a modern museum.

This travel plan covers historical landmarks, must-try local foods, and unique cultural experiences, offering a comprehensive taste of Japan.

2024-10-12 How to Delete Code - Matthew Jones - ACCU 2024 - YouTube { www.youtube.com }

image-20241012110250287

Quote from attendee:

"Code is a cost. Code is not an asset. We should have less of it, not more of it."

Other thoughts on this topic:

Martin Fowler (Agile advocate and software development thought leader) has expressed similar thoughts in his writings. In his blog post "Code as a Liability," he explains that every line of code comes with maintenance costs, and the more code you have, the more resources are needed to manage it over time:

"The more code you have, the more bugs you have. The more code you have, the harder it is to make changes."

John Ousterhout, a professor and computer scientist, has echoed this in his book "A Philosophy of Software Design." He talks about code complexity and how more code often means more complexity, which in turn leads to more problems in the future:

"The most important thing is to keep your code base as simple as possible."

(GPT Summary)

Cppcheck - A tool for static C/C++ code analysis

  1. Dead Code Identification and Removal

    • Importance of removing dead code: Dead code clutters the codebase, adds complexity, and increases maintenance costs. Action: Actively look for dead functions or features that are no longer in use. For example, if a feature has been deprecated but not fully removed, ensure its code is deleted.
    • Techniques for identifying dead code: Use tools like static analysis, manual code review, or testing. Action: Rename the suspected dead function, rebuild, and let the compiler flag errors where the function is still being used.
    • Using static analysis and compilers: These tools help identify unreachable or unused code. Action: Regularly run tools like Cppcheck or the Clang Static Analyzer in your CI pipeline to detect dead code.
    • Renaming functions to detect dead code: A simple way to identify unused code. Action: Rename a function (e.g., myFunction to myFunction_old), and see if it causes errors during the build process. If not, the function is likely dead and can be safely removed.
    • Deleting dead features and their subtle dependencies: Features often have dependencies that may be missed. Action: When removing a dead feature, check for subtle references, such as menu items, command-line flags, or other parts of the system that may still rely on it.
  2. Caution with Large Codebase Changes

    • Taking small, careful steps: Removing too much at once can lead to major issues. Action: Remove a small function or part of the code, test, and repeat. For example, instead of removing an entire module, start with one function.
    • Avoiding aggressive feature removal: Over-removal can cause unexpected failures. Action: Approach code deletion incrementally. Don’t aim to delete an entire feature at once; instead, tease out its components slowly to avoid breaking dependencies.
    • Moving code to reduce scope: If code is not needed at the global scope, move it to a more local context. Action: Move public functions from header files to .cpp files and see if any errors occur. This can help isolate the function’s scope and make it easier to remove later.
    • Risk of breaking builds: Avoid breaking the build with massive deletions. Action: Ensure you take incremental steps, test continuously, and use atomic commits to revert small changes if needed.
  3. Refactoring Approaches

    • Iterative refactoring and deletion: Refactor code in small steps to ensure stability. Action: When removing a dead function, check what other code depends on it. If a function calling it becomes unused, continue refactoring iteratively.
    • Refactoring legacy code: Legacy code can often hide dead functions. Action: Slowly reduce the scope of legacy functions by moving them to lower levels (like .cpp files) to see if their usage drops. If not used anymore, delete them.
    • Using unit tests for refactoring: Ensure that code works after refactoring. Action: Wrap legacy string classes or custom utility functions in unit tests, then replace the core logic with modern STL alternatives. If the tests pass, the old code can be removed safely.
    • Replacing custom features with third-party libraries: Many custom solutions from the past can now be replaced by modern libraries. Action: If you have a custom logger class, consider replacing it with a more standardized and robust library like spdlog.
  4. Working with Tools

    • Using plugins or IDEs: Most modern IDEs can help identify dead code. Action: Use Visual Studio or IntelliJ plugins that flag unreachable code or highlight unused functions.
    • Leveraging Compiler Explorer: Use online tools to isolate and test specific snippets of code. Action: If you can’t refactor in the main codebase, copy the function into Compiler Explorer (godbolt.org) and experiment with it there before making changes.
    • Setting compiler flags: Enable warnings for unreachable or unused code. Action: Use -Wall or -Wextra in GCC or Clang to flag potentially dead code. For example, set -Wextra in your build system to catch unused variables and unreachable code (see the sketch after this list).
    • Running static analysis tools: Integrate tools like Cppcheck into your CI pipeline. Action: Add Cppcheck to Jenkins and enable its unused-function check (--enable=unusedFunction) so it can report dead functions across multiple translation units.
  5. Source Control Best Practices

    • Atomic commits: Always break down deletions into small, reversible changes. Action: Commit changes one at a time and with meaningful messages, such as "Deleted unused function myFunction()." This allows you to easily revert just one commit if needed.
    • Small steps and green builds: Ensure the build passes after each commit. Action: Commit your changes, wait for the CI pipeline to return a green build, and only proceed if everything passes.
    • Keeping history in the main branch: Deleting code in a branch risks losing history. Action: Perform deletions in the main branch with proper commit messages. In Git, avoid squashing commits when merging deletions, as this may obscure your work history.
  6. Communication and Collaboration

    • Educating teams about dead code: Not everyone understands the importance of cleaning up dead code. Action: When you find dead code, educate the team by documenting what you’ve removed and why.
    • Communicating when deleting shared code: Deleting code that others may rely on needs consensus. Action: Start a conversation with the team and document the code you intend to delete. Make sure the removal won’t disrupt anyone’s work.
    • Seasonal refactoring: Pick quieter periods like holidays for large-scale refactoring. Action: Plan code cleanups during slower times (e.g., Christmas or summer) when fewer developers are working. For example, take the three days between Christmas and New Year to remove unused code while avoiding merge conflicts.
  7. Handling Legacy Features

    • Addressing dead features tied to legacy systems: These can be tricky to remove without causing issues. Action: Mark features as deprecated first, communicate with stakeholders, and plan their removal after a safe period.
    • Managing end-of-life features carefully: Inform customers and stakeholders before removing any external-facing features. Action: Announce the feature’s end-of-life, allow time for feedback, and only remove the feature after this period (e.g., six months).
  8. Miscellaneous Code Cleanup

    • Removing unnecessary includes: Many includes are added but never removed. Action: Comment out all include statements at the top of a file, then add them back one by one to see which ones are actually needed.
    • Deleting repeated or needless code: Repeated code should be factored into functions or libraries. Action: If you find duplicated code, refactor it into a helper function or a shared library to reduce repetition.
  9. Comments in Code

    • Avoiding inane comments: Comments that explain obvious code operations are distracting. Action: Delete comments like “// increment i by 1” that explain simple logic you can deduce from reading the code.
    • Recognizing risks in outdated comments: Old comments can hide the fact that code has changed. Action: When refactoring, ensure comments are either updated or removed to avoid misleading information about the code’s purpose.
    • Focusing on clean code: Let the code speak for itself. Action: Favor well-written, self-explanatory code that requires minimal commenting. For instance, use descriptive function names like calculateTotal() instead of adding comments like “// This function calculates the total.”
  10. When to Delete Code

    • Timing deletions carefully: Avoid risky deletions right before a release. Action: Plan large code cleanups in advance, and avoid removing any code near a major product release when stability is crucial.
    • Refactoring during quiet periods: Use downtimes, such as post-release, for cleanup. Action: After a major release or during holidays, revisit old tasks marked for deletion.
    • Tracking deletions in the backlog: Use a backlog to schedule code deletions that can’t be done immediately. Action: Create a "technical debt" section in your backlog and record all dead code identified for future cleanup.
  11. Final Thoughts on Refactoring

    • Challenging bad habits: Sometimes teams resist deleting old code. Action: Slowly introduce refactoring practices, starting small to show the benefits.
    • Measuring and recording progress: Keep track of all dead code and document changes. Action: Use tools like Jira to track deletions and improvements in code health.
    • Deleting responsibly: Don’t delete code just for the sake of it. Action: Ensure that deleted code is truly unused and won’t cause issues down the line. For example, test thoroughly before removing any core functionality.
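
As a concrete illustration of the compiler-flag and rename-and-rebuild techniques above, here is a small hypothetical translation unit; the function names are invented, and the warnings mentioned in the comments are the standard GCC/Clang ones enabled by -Wall/-Wextra.

```cpp
// dead_code_demo.cpp (hypothetical) -- compile with: g++ -Wall -Wextra -c dead_code_demo.cpp
#include <string>

// An internal-linkage helper that nothing calls any more: -Wall's
// -Wunused-function flags it, marking it as a deletion candidate.
static std::string formatLegacyHeader(const std::string& title) {
    return "== " + title + " ==";
}

// The rename-and-rebuild trick: rename a suspect function (e.g. to
// formatReport_old) and recompile; every surviving caller now fails to build.
// If the build stays green, the function is very likely dead.
std::string formatReport(const std::string& body) {
    int unusedCounter = 0;  // -Wunused-variable (also in -Wall) flags this
    return body + "\n";
}
```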

2024-09-29 Insights From an L7 Meta Manager: Interviews, Onboarding, and Building Trust - YouTube { www.youtube.com }

image-20241013110406428

High-Level Categories of Problems and Solutions

1. Onboarding and Adjustment in New Senior Roles (00)

  • Problem: Senior engineers often struggle when transitioning to new companies, particularly in adjusting to different company cultures and technical structures.

    • Context: Moving between large tech companies like Amazon and Meta presents challenges due to different coding practices (e.g., service-oriented architecture vs. monorepo) and operational structures.

    • Root Cause: A mismatch between previous experiences and new company environments.

    • Solution: Avoid trying to change the new environment immediately. Instead, focus on learning and adapting to the culture. Build trust with the team over six to seven months before attempting major changes.

    • Timestamp: 00:03:30

    • Quote:

      "If you go join another company, you've got a lot to learn, you've got a lot of relationships to build, and you ultimately need to figure out how to generalize your skill set."

2. Building Trust and Relationships in Senior Roles (00)

  • Problem: Senior engineers often fail to invest time in building relationships and trust with new teams.

    • Context: New senior engineers may rush into projects without first establishing rapport with their colleagues.

    • Root Cause: Lack of emphasis on trust-building leads to resistance from teams.

    • Solution: Dedicate the first few months to relationship-building and understanding the team’s dynamics. Don’t attempt large projects right away.

    • Timestamp: 00:05:00

    • Quote:

      "If you rush that process, you're going to be in for a hell of a lot of resistance."

3. Poor Ramp-up Periods for New Engineers (00)

  • Problem: New hires are often not given enough time to ramp up before being evaluated in performance reviews.

    • Context: Lack of structured ramp-up time for new senior hires can lead to poor performance evaluations early on.

    • Root Cause: Managers failing to allocate sufficient time for new employees to learn and adapt.

    • Solution: Managers should provide clear onboarding timelines (6-7 months) for engineers to integrate into teams, with gradual increases in responsibility.

    • Timestamp: 00:09:00

    • Quote:

      "The main thing that we did is just basically give them a budget of some time... to build up their skill set and trust with the team."

4. Mistakes in Adapting to New Cultures (00)

  • Problem: Senior engineers often try to change new environments too quickly, leading to friction.

    • Context: Engineers accustomed to one type of tech stack or organizational process may attempt to enforce old methods in a new setting.

    • Root Cause: Engineers feel uncomfortable in the new culture and attempt to recreate their old environment.

    • Solution: Focus on understanding the reasons behind the new company's practices before suggesting any changes.

    • Timestamp: 00:07:00

    • Quote:

      "Failure mode... is to try to change everything... and that's almost always the wrong approach."

Performance Reviews and Evaluations

5. Misunderstanding the Performance Review Process (00)

  • Problem: Engineers sometimes misunderstand how they are evaluated in performance reviews, especially during their first year.

    • Context: There’s often confusion about how contributions during the onboarding period are assessed.

    • Root Cause: Lack of transparency or communication from managers regarding performance criteria.

    • Solution: Managers must clarify performance expectations and calibration processes, while engineers should ask for regular feedback to stay on track.

    • Timestamp: 00:10:00

    • Quote:

      "Some managers just don't do a good job of actually setting the stage for new hires."

6. Lack of Visibility in Performance Reviews (00)

  • Problem: Senior engineers often fail to showcase their work to the broader team, limiting their visibility in performance reviews.

    • Context: In larger organizations, a single manager is not solely responsible for performance evaluations. Feedback from other team members and leadership is critical.

    • Root Cause: Not socializing work with peers or senior leadership.

    • Solution: Regularly communicate your contributions to multiple stakeholders, not just your direct manager.

    • Timestamp: 00:14:00

    • Quote:

      "Socialize the work that you're doing with those other people... it's even better if you've had a chance to actually talk with them."

7. Taking on Projects Too Early (00)

  • Problem: Engineers may overestimate their readiness and take on large projects too soon after joining a new company.

    • Context: Jumping into big projects without adequate preparation can lead to mistakes and strained relationships.

    • Root Cause: Lack of patience and eagerness to prove oneself.

    • Solution: Focus on smaller tasks and gradually scale up responsibility after establishing trust and familiarity with the environment.

    • Timestamp: 00:06:30

    • Quote:

      "Picking up a massive project as soon as you join a company is probably not the best idea."

Behavioral and Technical Interviews

8. Lack of Depth in Behavioral Interviews (00)

  • Problem: Engineers often struggle with behavioral interviews, particularly when it comes to self-promotion and clearly discussing their impact.

    • Context: Senior engineers may downplay their role in leading large projects, failing to convey their leadership and influence.

    • Root Cause: Engineers often feel uncomfortable talking about their own contributions.

    • Solution: Engineers need to learn how to take credit for their work and articulate the complexity of their projects in interviews.

    • Timestamp: 00:19:00

    • Quote:

      "If you simply talk about your team and you aren't framing this as you driving, it doesn't demonstrate the level that I'm looking for."

9. Over-Reliance on Rehearsed Answers in Design Interviews (00)

  • Problem: In design interviews, engineers sometimes rely on rehearsed answers, which doesn’t showcase their real problem-solving abilities.

    • Context: Instead of improvising, engineers often recite previously learned solutions that don't apply to the specific design problem at hand.

    • Root Cause: A lack of confidence in applying their experience to new problems.

    • Solution: Approach design problems creatively by focusing on unique elements of the task and how past experience can offer novel solutions.

    • Timestamp: 00:17:00

    • Quote:

      "You're really supposed to be scribbling outside the lines."

Key Problems and Their Solutions Summary:

  1. Onboarding and Adjustment: Senior engineers often face challenges adapting to new company cultures. Solution: Focus on learning the environment, and avoid trying to change it too quickly.
  2. Trust and Relationships: Lack of relationship-building leads to resistance. Solution: Take time to build rapport and trust with the team before diving into big projects.
  3. Performance Reviews: New hires may not understand performance expectations. Solution: Ensure transparency in review processes and socialize your contributions with key stakeholders.
  4. Interviews: Engineers may struggle in behavioral and design interviews. Solution: Take ownership of your contributions and avoid relying on rehearsed answers.

These are the most critical problems discussed in the transcript, with clear, actionable advice for each.

2024-09-24 LLMs gone wild - Tess Ferrandez-Norlander - NDC Oslo 2024 - YouTube { www.youtube.com }

Tess Ferrandez-Norlander (works at Microsoft)

image-20240923230759509

image-20240923231052659

2024-09-24 Overview - Chainlit { docs.chainlit.io }

image-20240923232539521

image-20240924110712641

2024-09-24 2406.04369 RAG Does Not Work for Enterprises { arxiv.org }

image-20240924111707846

2024-09-26 Your website does not need JavaScript - Amy Kapernick - NDC Oslo 2024 - YouTube { www.youtube.com }

image-20240926005552838

No JS (amyskapers.dev)

image-20240926235229401

2024-09-27 amykapernick/no_js { github.com }

2024-08-24 Reducing C++ Compilation Times Through Good Design - Andrew Pearcy - ACCU 2024 - YouTube { www.youtube.com }

image-20241013111059590

  1. Precompiled Headers: One of the most effective methods is using precompiled headers (PCH). This technique involves compiling the header files into an intermediate form that can be reused across different compilation units. By doing so, you significantly reduce the need to repeatedly process these files, cutting down the overall compilation time. Tools like CMake can automate this by managing dependencies and ensuring headers are correctly precompiled and reused across builds.

  2. Parallel Compilation: Another approach is parallel compilation. Tools like Make, Ninja, and distcc allow you to compile multiple files simultaneously, taking advantage of multi-core processors. For instance, using the -j flag in make or ninja enables you to specify the number of jobs (i.e., compilation tasks) to run in parallel, which can dramatically reduce the time it takes to compile large projects.

  3. Unity Builds: Unity builds are another technique where multiple source files are compiled together as a single compilation unit. This reduces the overhead caused by multiple compiler invocations and can be particularly useful for large codebases. However, unity builds can introduce some challenges, such as longer error messages and potential name collisions, so they should be used selectively.

  4. Code Optimization: Structuring your code to minimize dependencies can also be highly effective. Techniques include forward declarations, splitting projects into smaller modules with fewer interdependencies, and replacing heavyweight standard library headers with lighter alternatives when possible. By reducing the number of dependencies that need to be recompiled when a change is made, you can significantly decrease compile times.

  5. Caching Compilation Results: Tools like ccache store previous compilation results, which can be reused if the source files haven’t changed. This approach is particularly useful in development environments where small, incremental changes are frequent.

Here is the detailed digest from Andrew Pearcy's talk on "Reducing Compilation Times Through Good Design", along with the relevant project homepages and tools referenced throughout the discussion.

Video Title: Reducing Compilation Times Through Good Design

Andrew Pearcy, an engineering team lead at Bloomberg, outlines strategies for significantly reducing C++ compilation times. The talk draws from his experience of cutting build times from one hour to just six minutes, emphasizing practical techniques applicable in various C++ projects.

Motivation for Reducing Compilation Times

Pearcy starts by explaining the critical need to reduce compilation times. Long build times lead to context switching, reduced productivity, and delays in CI pipelines, affecting both local development experience and time to market. Additionally, longer compilation times make adopting static analysis tools like Clang-Tidy impractical due to the additional overhead. Reducing compilation time also optimizes resource utilization, especially in large companies where multiple machines are involved.

Overview of the C++ Compilation Model

He recaps the C++ compilation model, breaking it down into phases: pre-processing, compilation, and linking. The focus is primarily on the first two stages. Pearcy notes that large header files and unnecessary includes can significantly inflate the amount of code the compiler must process, which in turn increases build time.

Quick Wins: Build System, Linkers, and Compiler Caching

1. Build System:

  • Ninja: Pearcy recommends using Ninja instead of Make for better dependency tracking and faster incremental builds. Ninja was designed for Google's Chromium project and can often be an order of magnitude faster than Make. It utilizes all available cores by default, improving build efficiency.
  • Ninja Documentation: Ninja Build System

2. Linkers:

  • LLD and Mold: He suggests switching to LLD, a faster alternative to the default linker, LD. Mold, a modern linker written by Rui Ueyama (who also worked on LLD), is faster still but consumes more memory; it is open source on Unix platforms, with a paid offering for macOS and Windows.
  • LLD: LLVM Project - LLD
  • Mold: Mold: A Modern Linker

3. Compiler Caching:

  • Ccache: Pearcy strongly recommends Ccache for caching compilation results to speed up rebuilds by avoiding recompilation of unchanged files. This tool can be integrated into CI pipelines to share cache across users, which can drastically reduce build times.
  • Ccache: Ccache

Detailed Techniques to Reduce Build Times

1. Forward Declarations:

  • Pearcy emphasizes the use of forward declarations in headers to reduce unnecessary includes, which can prevent large headers from being included transitively across multiple translation units. This reduces the amount of code the compiler needs to process.

2. Removing Unused Includes:

  • He discusses the challenge of identifying and removing unused includes, mentioning tools like Include What You Use and Graphviz to visualize dependencies and find unnecessary includes.
  • Include What You Use: Include What You Use
  • Graphviz: Graphviz

3. Splitting Protocol and Implementation:

  • To reduce dependency on large headers, he suggests the Pimpl (Pointer to Implementation) Idiom or creating interfaces that hide the implementation details. This technique helps in isolating the implementation in a single place, reducing the amount of code the compiler needs to process in other translation units, as the sketch after this list shows.

4. Precompiled Headers (PCH):

  • Using precompiled headers for frequently included but rarely changed files, such as standard library headers, can significantly reduce build times. However, he warns against overusing PCHs as they can lead to diminishing returns if too many headers are precompiled.
  • CMake added support for PCH in version 3.16, allowing easy integration into the build process.
  • CMake Precompiled Headers: CMake Documentation

5. Unity Build:

  • Pearcy introduces Unity builds, where multiple translation units are combined into a single one, reducing redundant processing of headers and improving build times. This technique is particularly effective in reducing overall build times but can introduce issues like naming collisions in anonymous namespaces.
  • CMake provides built-in support for Unity builds, with options to batch files to balance parallelization and memory usage.
  • Unity Build Documentation: CMake Unity Builds
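
To tie the forward-declaration and Pimpl advice together, here is a minimal sketch using invented Widget and Engine names; in a real project the class definition would live in widget.h and everything below the divider in widget.cpp, so only that one translation unit sees the heavy includes.

```cpp
// Minimal Pimpl + forward-declaration sketch (hypothetical Widget/Engine names).
#include <memory>
#include <string>

class Engine;  // forward declaration is enough for the public interface

class Widget {
public:
    Widget();
    ~Widget();                      // out-of-line, where Impl is a complete type
    void attach(Engine& engine);
private:
    struct Impl;                    // opaque implementation
    std::unique_ptr<Impl> impl_;
};

// ---- would live in widget.cpp, behind #include "engine.h" ----
class Engine { public: std::string name = "default"; };

struct Widget::Impl {
    Engine* engine = nullptr;
    std::string label;
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::attach(Engine& engine) { impl_->engine = &engine; }

int main() {
    Engine e;
    Widget w;
    w.attach(e);   // clients never see Widget's internals or Engine's header
    return 0;
}
```

Changing Widget::Impl or the Engine header now only forces widget.cpp to rebuild, which is where the compile-time savings come from.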

2024-07-26 Turbocharged: Writing High-Performance C# and .NET Code - Steve Gordon - NDC Oslo 2024 - YouTube

image-20241013111239413

Turbocharging Your .NET Code with High-Performance APIs

Steve, a Microsoft MVP and engineer at Elastic, discusses various high-performance APIs in .NET that can optimize application performance. The session covers measuring and improving performance, focusing on execution time, throughput, and memory allocations.

Performance in Application Code Performance is measured by how quickly code executes, the throughput (how many tasks an application can handle in a given timeframe), and memory allocations. High memory allocations can lead to frequent garbage collections, impacting performance. Steve emphasizes that performance optimization is contextual, meaning not every application requires the same level of optimization.

Optimization Cycle The optimization cycle involves measuring current performance, making small changes, and re-measuring to ensure improvements. Tools like Visual Studio profiling, PerfView, and JetBrains products are useful for profiling and measuring performance. BenchmarkDotNet is highlighted for micro-benchmarking, providing precise measurements by running benchmarks multiple times to get accurate data.

High-Performance Code Techniques

  1. Span<T>: A type that provides a read/write view over contiguous memory, allowing for efficient slicing and memory operations. It is highly efficient with constant-time operations for slicing.
  2. Array Pool: A pool for reusing arrays to avoid frequent allocations and deallocations. Using the ArrayPool<T>.Shared pool allows for efficient memory reuse, reducing short-lived allocations.
  3. System.IO.Pipelines: Optimizes reading and writing streams by managing buffers and minimizing overhead. It is particularly useful in scenarios like high-performance web servers.
  4. System.Text.Json: A high-performance JSON API introduced in .NET Core 3. It includes low-level Utf8JsonReader and Utf8JsonWriter for zero-allocation JSON parsing, as well as higher-level APIs for serialization and deserialization.

Examples and Benchmarks Steve presents examples of using these APIs in real-world scenarios, demonstrating significant performance gains. For instance, using Span<T> and ArrayPool in a method that processes arrays and messages led to reduced execution time and memory allocations. Switching to System.IO.Pipelines and System.Text.Json resulted in similar improvements.

"Slicing is really just changing the view over an existing block of memory... it's a constant time, constant cost operation."

"Measure your code, don’t assume, don’t make assumptions with benchmarks, it’s dangerous."

Conclusion Optimizing .NET code with high-performance APIs requires careful measurement and iterative improvements. While not all applications need such optimizations, those that do can benefit from significant performance gains. Steve concludes by recommending the book "Pro .NET Memory Management" for a deeper understanding of memory management in .NET.

2024-07-07 [Theo - t3․gg](https://www.youtube.com/@t3dotgg) My Spiciest Take On Tech Hiring - YouTube

2024-07-07 Haskell for all: My spiciest take on tech hiring

image-20240707101322999

High-Level Categories of Problems

  1. Tech Hiring Process Issues

    • Too Many Interviews (00): Problem: Candidates face multiple rounds of interviews (up to seven), causing frustration and inefficiency. Many find it counterproductive to go through so many technical interviews. Root Cause: Overly complex hiring processes that assume more interviews lead to better candidates. Advice: Implement a streamlined process with just one technical interview and one non-technical interview, each lasting no more than one hour. Long interview processes are unnecessary and may filter out good candidates.

    • Interview Redundancy (00): Problem: The same type of technical questions are asked repeatedly across different interviews, leading to duplication. Root Cause: Lack of coordination among interviewers and reliance on similar types of technical questions. Advice: Ensure each interviewer asks unique, relevant questions and does not rely on others to gather the same information. Interviewers should bear ultimate responsibility for gathering critical data.

    • Bias in Hiring: Problem: Interview processes are biased because hiring managers may already have preferred candidates (referrals, strong portfolios) before the process begins. Root Cause: Pre-existing relationships with candidates or prior work experience influence decisions. Advice: Avoid dragging out the process to mask biases—shorter, efficient interviews can make the bias more visible but manageable. Long processes don't necessarily filter out bias.

    • Long Interview Processes Favor Privilege: Problem: Prolonged interview panels select for candidates who can afford to take time off work, favoring those from more privileged backgrounds. Root Cause: Candidates from less privileged backgrounds cannot afford to engage in drawn-out interviews. Advice: Shorten the interview length and focus on relevant qualifications. Ensure accessibility for all candidates by keeping the process simple.

  2. Interview Process Structure

    • Diffusion of Responsibility: Problem: In group interview settings, responsibility for hiring decisions is diffused, leading to poor or delayed decision-making. Root Cause: No single person feels accountable for making the final decision. Advice: Assign ownership of decisions by giving specific interviewers responsibility for crucial aspects of the process. This reduces the likelihood of indecision and delayed outcomes.

    • Hiring Based on Team Fit vs. Technical Ability: Problem: Emphasis on technical abilities often overshadows the importance of team compatibility. Root Cause: Focus on technical skills without considering cultural and interpersonal dynamics within the team. Advice: Ensure that interviews assess not only technical competence but also how well candidates fit into the team dynamic. Incorporate group discussions or casual settings (e.g., lunch meetings) to gauge team vibe.

    • Ambiguity in Interviewer Opinions: Problem: Some interviewers avoid committing to clear opinions about candidates, preferring neutral stances. Root Cause: Lack of confidence or fear of being overruled by the majority. Advice: Use a rating system (e.g., 1–4 scale) that forces interviewers to choose a strong opinion, either in favor of or against a candidate.

  3. Candidate Experience and Behavior

    • Negative Behavior in Interviews: Problem: Candidates who perform well technically but exhibit unprofessional behavior (e.g., showing up late or hungover) can still pass through the hiring process. Root Cause: Strong technical performance may overshadow concerns about professionalism and reliability. Advice: Balance technical performance with non-technical evaluations. Weigh behaviors such as punctuality and professional demeanor just as heavily as coding skills.

    • Take-Home Tests and Challenges: Problem: Some candidates view take-home challenges as extra, unnecessary work, while others see them as a chance to showcase skills. Root Cause: Different candidates have different preferences and responses to technical assessments. Advice: Offer take-home tests as an option, but don't make them mandatory. Adjust the evaluation method based on candidate preferences to ensure both parties feel comfortable.

  4. Systemic Issues in the Hiring Process

    • Healthcare Tied to Jobs: Problem: In the U.S., job-based healthcare forces candidates to accept positions they might not want or complicates transitions between jobs. Root Cause: The healthcare system is tied to employment, making job transitions risky. Advice: There's no direct solution provided here, but highlighting the need for systemic changes in healthcare could make the hiring process more equitable.

    • Lack of Feedback to Candidates: Problem: Many companies avoid giving feedback to candidates after interviews, leaving them unsure of their performance. Root Cause: Fear of legal liability or workload concerns. Advice: Provide constructive feedback to candidates, even if they aren't selected. It helps build long-term relationships and contributes to a positive company reputation. Some of the best connections come from transparent feedback post-interview.

  5. Hiring for Senior Positions

    • Senior Candidates Have Low Tolerance for Long Processes: Problem: Highly qualified senior candidates are more likely to decline long and drawn-out interview processes. Root Cause: Senior candidates, due to their experience and expertise, are less willing to tolerate inefficient processes. Advice: Streamline the process for senior roles. Keep interviews short, efficient, and focused on relevant discussions. High-level candidates prefer concise assessments over lengthy ones.

  6. Hiring on Trust vs. Formal Interviews

    • Hiring Based on Relationships: Problem: Engineers with pre-existing relationships or referrals are more likely to be hired than those without, bypassing formal interviews. Root Cause: Prior work relationships build trust, which can overshadow the need for formal vetting. Advice: Trust-based hiring should be encouraged when there is prior working experience with the candidate. However, make efforts to balance trust with fairness by including formal evaluations where necessary.

Key Problems Summary

  • The length and complexity of the hiring process discourage many strong candidates, particularly senior-level applicants. Simplifying the process to two interviews (one technical and one non-technical) is recommended.
  • Bias in the hiring process, particularly when managers have pre-existing relationships with candidates, leads to unfair outcomes.
  • Long interview processes favor privileged candidates who can afford to take time off, disadvantaging those from less privileged backgrounds.
  • Providing feedback to candidates is crucial for building long-term relationships and ensuring a positive hiring experience, yet it's often avoided due to legal concerns.
  • Team fit is just as important as technical skills, and companies should incorporate group interactions to assess interpersonal dynamics.

Most Critical Issues and Solutions

  • Problem: Too many technical interviews create frustration and inefficiency. Solution: Use just one technical and one non-technical interview, and assign responsibility for gathering all relevant information during these sessions.

  • Problem: Bias due to pre-existing relationships. Solution: Shorten the process to expose bias more clearly and rely on trust-based hiring only when balanced with formal interviews.

  • Problem: Lack of feedback to candidates. Solution: Provide constructive feedback to help candidates improve and establish long-term professional relationships.

· 15 min read

Good Reads

2024-08-27 Four Lessons from 2023 That Forever Changes My Software Engineering Career | by Yifeng Liu | Medium { medium.com }

This past year, four key lessons transformed my approach to software engineering.

First, I learned that execution is as important as the idea itself. Inspired by Steve Jobs, who highlighted the gap between a great idea and a great product, I focused on rapid prototyping to test feasibility and internal presentations to gather feedback. I kept my manager informed to ensure we were aligned and honest about challenges.

Second, I realized that trust and credibility are fragile but crucial. As a senior engineer, I'm expected to lead by solving complex issues and guiding projects. I saw firsthand how failing to execute or pushing unrealistic timelines could quickly erode trust within my team.

The third lesson was about the importance of visibility. I understood that hard work could go unnoticed if I didn’t make it visible. I began taking ownership of impactful projects and increased my public presence through presentations and updates. I also honed my critical thinking to offer valuable feedback and identify improvement opportunities.

Finally, I learned to focus on changing myself rather than others. I used to try to change my team or company, but now I realize it’s more effective to work on my growth and influence others through my actions. Understanding the company’s culture and my colleagues' aspirations helped me align my efforts with my career goals.

These lessons have reshaped my career and how I approach my role as an engineer.

2024-08-28 Just use fucking paper, man - Andy Bell { andy-bell.co.uk }

27th of August 2024

I’ve tried Notion, Obsidian, Things, Apple Reminders, Apple Notes, Jotter and endless other tools to keep me organised and sure, Notion has stuck around the most because we use it for client stuff, but for todo lists, all of the above are way too complicated.

I’ve given up this week and gone back to paper and a pencil and I feel unbelievably organised and flexible, day-to-day. It’s because it’s simple. There’s nothing fancy. No fancy pen or anything like that either. Just a notebook and a pencil.

I’m in an ultra busy period right now so for future me when you inevitably get back to this situation: just. use. fucking. paper.

2024-08-29 The slow evaporation of the free/open source surplus – Baldur Bjarnason { www.baldurbjarnason.com }

I've been thinking a lot about the state of Free and Open Source Software (FOSS) lately. My concern is that FOSS thrives on surplus—both from the software industry and the labor of developers. This surplus has been fueled by high margins in the tech industry, easy access to investment, and developers who have the time and financial freedom to contribute to FOSS projects. However, I'm worried that these resources are drying up.

High interest rates are making investments scarcer, particularly for non-AI software, which doesn't really support open-source principles. The post-COVID economic correction is leading to layoffs and higher coder unemployment, which means fewer people have the time or incentive to contribute to FOSS. OSS burnout is another issue, with fewer fresh developers stepping in to replace those who are exhausted by maintaining projects that often lack supportive communities.

Companies are also cutting costs and questioning the value of FOSS. Why invest in open-source projects when the return on investment is uncertain? The rise of LLM-generated code is further disconnecting potential contributors from FOSS projects, weakening the communities that sustain them.

My fear is that FOSS is entering a period of decline. As the industry and labor surpluses shrink, FOSS projects might suffer from neglect, security issues, or even collapse. While some of this decline might be a necessary correction, it's hard not to worry about the future of the FOSS ecosystem, especially when we don't know which parts are sustainable and which are not.

2024-08-29 Why does getting a job in tech suck right now? (Is it AI?!?) – r y x, r { ryxcommar.com }

image-20240915141710361

2024-08-31 Using Fibonacci Numbers to Convert from Miles to Kilometers and Vice Versa { catonmat.net }

Take two consecutive Fibonacci numbers, for example 5 and 8.

And you're done converting. No kidding – there are 8 kilometers in 5 miles. To convert back just read the result from the other end – there are 5 miles in 8 km!

Another example.

Let's take two consecutive Fibonacci numbers 21 and 34. What this tells us is that there are approximately 34 km in 21 miles and vice versa. (The exact answer is 33.79 km.)

Mind = blown. Completely.
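The trick works because the ratio of consecutive Fibonacci numbers converges to the golden ratio (~1.618), which happens to sit close to the kilometers-per-mile factor (~1.609). A small sketch (mine, not from the article) comparing the Fibonacci estimate with the exact conversion:

#include <cstdio>

int main() {
    const double kmPerMile = 1.609344;

    // Consecutive Fibonacci pairs: read the next number to go from miles to km.
    int miles = 1, km = 2;
    for (int i = 0; i < 8; ++i) {
        std::printf("%3d miles ~ %3d km (exact: %6.2f km)\n",
                    miles, km, miles * kmPerMile);
        int next = miles + km;
        miles = km;
        km = next;
    }
    return 0;
}

For 21 and 34 this prints an exact value of about 33.8 km, in line with the article's 33.79 km example.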

2024-09-11 The Art of Finishing | ByteDrum { www.bytedrum.com }

The article explores the challenge of unfinished projects and the cycle of starting with enthusiasm but failing to complete them. The author describes this as the Hydra Effect—each task completed leads to new challenges. Unfinished projects feel full of potential, but fear of imperfection or even success prevents many developers from finishing.

"An unfinished project is full of intoxicating potential. It could be the next big thing... your magnum opus."

However, leaving projects incomplete creates mental clutter, making it hard to focus and learn key lessons like optimization and refactoring. Finishing is crucial for growth, both technically and professionally.

"By not finishing, you miss out on these valuable learning experiences."

To break this cycle, the author offers strategies: define "done" early, focus on MVP (Minimum Viable Product), time-box projects, and separate ideation from implementation. Practicing small completions and using accountability are also recommended to build the habit of finishing.

The article emphasizes that overcoming the Hydra Effect requires discipline but leads to personal and professional growth.

2024-09-11 Improving Application Availability: The Basics | by Mario Bittencourt | SSENSE-TECH | Aug, 2024 | Medium { medium.com }

In this article, I introduce the essentials of application availability and how to approach high availability. High availability is measured by uptime percentage. Achieving 99.999% availability (five nines) means accepting only about 5 minutes of downtime per year (roughly 5.26 minutes), which requires automation to detect and fix issues fast.
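As a quick sanity check on that budget (my own back-of-the-envelope sketch, not from the article), the yearly downtime allowance for each availability target can be computed directly:

#include <cstdio>

int main() {
    const double minutesPerYear = 365.0 * 24.0 * 60.0;

    // Availability targets from two nines up to five nines.
    const double targets[] = {0.99, 0.999, 0.9999, 0.99999};

    for (double availability : targets) {
        double downtimeMinutes = (1.0 - availability) * minutesPerYear;
        std::printf("%.3f%% uptime -> %8.2f minutes of downtime per year\n",
                    availability * 100.0, downtimeMinutes);
    }
    return 0;
}

Five nines comes out to roughly 5.26 minutes per year; four nines is already about 53 minutes, which is why each extra nine gets dramatically more expensive.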

I discuss redundancy as a key strategy to improve availability by using backups for connectivity, compute resources, and persistence. If one component fails, the system switches to a secondary option. However, redundancy adds both cost and complexity. More components require advanced tools, like load balancers, to manage failures, but these solutions introduce their own reliability concerns.

Not every part of an application needs the same availability target. In an e-commerce system, for instance, I categorize components into tiers:

  • T1 (website and payments) must stay available at all times.
  • T2 (order management) allows some downtime.
  • T3 (fulfillment) can tolerate longer outages.
  • T4 (ERP) has the least strict requirements.

"Your goal is to perform an impact analysis and classify each component in tiers according to its criticality and customer impact."

By setting different availability targets for each tier, you can reduce costs while focusing on the most important parts of your system.

"All strategies to improve availability come with trade-offs, usually involving higher costs and complexity."

This sets the stage for future discussions on graceful degradation, asynchronous processing, and disaster recovery strategies.

2024-09-12 A Bunch of Programming Advice I’d Give To Myself 15 Years Ago Marcus' Blog { mbuffett.com }

If the team is constantly tripping over a recurring issue, it's crucial to fix the root cause, rather than repeatedly patching symptoms. The author mentions, "I decided to fix it, and it took ten minutes to update our subscription layer to call subscribers on the main thread instead," thereby removing the cause of crashes, streamlining the codebase, and reducing mental overhead.

Pace versus quality must be balanced based on context. In low-risk environments, it's okay to ship faster and rely on guardrails; in high-risk environments (like handling sensitive data), quality takes precedence. "You don’t need 100% test coverage or an extensive QA process, which will slow down the pace of development," when bugs can be fixed easily.

Sharpening your tools is always worth it. Being efficient with your IDE, shortcuts, and dev tools will pay off over time. Fast typing, proficiency in the shell, and knowing browser tools matter. Although people warn against over-optimizing configurations, "I don’t think I’ve ever seen someone actually overdo this."

When something is hard to explain, it's likely incidental complexity. Often, complexity isn't inherent but arises from the way things are structured. If you can't explain why something is difficult, it’s worth simplifying. The author reflects that "most of the complexity I was explaining was incidental... I could actually address that first."

Solve bugs at a deeper level, not just by patching the immediate issue. If a React component crashes due to null user data, you could add a conditional return, but it’s better to prevent the state from becoming null in the first place. This creates more robust systems and a clearer understanding of how things work.

Investigating bugs should include reviewing code history. The author discovered a memory leak after reviewing commits, realizing the issue stemmed from recent code changes. Git history can be essential for debugging complex problems that aren't obvious through logs alone.

Write bad code when needed to get feedback. Perfect code takes too long and may not be necessary in every context. It's better to ship something that works, gather feedback, and refine it. "If you err on the side of writing perfect code, you don’t get any feedback."

Make debugging easier by building systems that streamline the process. Small conveniences like logging state diffs after every update or restricting staging environment parallelism to 1 can save huge amounts of time. The author stresses, "If it’s over 50%, you should figure out how to make it easier."

Working on a team means asking questions when needed. Especially in the first few months, it's faster to ask a coworker for a solution than spending hours figuring it out solo. Asking isn’t seen as a burden, so long as it’s not something trivial that could be self-solved in minutes.

Maintaining a fast shipping cadence is critical in startups and time-sensitive projects. Speed compounds over time, and improving systems, reusable patterns, and processes that support fast shipping is essential. "Shipping slowly should merit a post-mortem as much as breaking production does."

This article reaction and discussion on youtube:

2024-09-12 Theo Unexpected Lessons I've Learned After 15 Years Of Coding - YouTube { www.youtube.com }

2024-09-14 We need to talk about "founder mode" - YouTube { www.youtube.com }

"Stop hiring for the things you don't want to do. Hire for the things you love to do so you're forced to deal with the things you don't want to do.

This is some of the best advice I've been giving lately. Early on, I screwed up by hiring an editor because I didn't like editing. Since I didn't love editing, I couldn't be a great workplace for an editor—I couldn't relate to them, and they felt alone. My bar for a good edit was low because I just wanted the work off my plate.

But when I started editing my own stuff, I got pretty good and actually started to like it. Now, I genuinely think I'll stop recording videos before I stop editing them. By doing those things myself, I ended up falling in love with them.

Apply this to startups: If you're a founder who loves coding, hire someone to do it so you can't focus all your time on it. Focus on the other crucial parts of your business that need your attention.

Don't make the mistake of hiring to avoid work. Embrace what you love, and let it force you to grow in areas you might be neglecting."

Original post: 2024-09-14 Founder Mode { paulgraham.com }

Theo

Breaking Through Organizational Barriers: Connect with the Doers, Not Just the Boxes

In large organizations, it's common to encounter roadblocks where teams are treated as "black boxes" on the org chart. You might hear things like, "We can't proceed because the XYZ team isn't available," or "They need more headcount before tackling this."

Here's a strategy that has made a significant difference for me:

Start looking beyond the org chart and reach out directly to the individuals who are making things happen.

How to find them?

  • Dive into GitHub or project repositories: See who's contributing the most code or making significant updates.
  • Identify the most driven team members: Every team usually has someone who's more passionate and proactive.
  • Reach out and build a connection: They might appreciate a collaborative partner who shares their drive.

Why do this?

  • Accelerate Progress: Bypass bureaucratic delays and get projects moving.
  • Build Valuable Relationships: These connections can lead to future opportunities, referrals, or even partnerships.
  • Expand Your Influence: Demonstrating initiative can set you apart and open doors within the organization.

Yes, there are risks. Your manager might question why you're reaching out independently, or you might face resistance. But consider the potential rewards:

  • Best Case: You successfully collaborate to solve problems, driving innovation and making a real impact.
  • Worst Case: Even if you face pushback, you've connected with someone valuable. If either of you moves on, that relationship could lead to exciting opportunities down the line.

2024-09-15 Why Scrum is Stressing You Out - by Adam Ard { rethinkingsoftware.substack.com }

📌 Sprints never stop. Sprints in Scrum are constant, unlike the traditional Waterfall model where high-pressure periods are followed by low-pressure times. Sprints create ongoing, medium-level stress, which is more damaging long-term than short-term, intense stress. Long-term stress harms both mental and physical health. Advice: Build in deliberate breaks between sprints. Allow teams time to recover, reflect, and recalibrate before the next sprint. Introduce buffer periods for less intense work or creative activities.

🔖 Sprints are involuntary. Sprints in a Scrum environment are often imposed on developers, leaving them no control over the process or duration. Lack of autonomy leads to higher stress, similar to studies where forced activity triggers stress responses in animals. Control over work processes can reduce stress and improve job satisfaction. Advice: Involve the team in the sprint planning process and give them a say in determining task durations, sprint length, and workload. Increase autonomy to reduce stress by tailoring the Scrum process to fit the team’s needs rather than rigidly following preset rules.

😡 Sprints neglect key supporting activities. Scrum focuses on completing tasks within sprint cycles but doesn’t allocate enough time for essential preparatory activities like brainstorming and research. The lack of preparation time creates stress and leads to suboptimal work because thinking and doing cannot be entirely separated. Advice: Allocate time within sprints for essential preparation, brainstorming, and research. Set aside dedicated periods for planning, learning, or technical exploration, rather than expecting full-time execution during the sprint.

🍷 Most Scrum implementations devolve into “Scrumfall.” Scrum is often mixed with Waterfall-like big-deadline pressures, which cancel out the benefits of sprints and increase stress. When major deadlines approach, Scrum practices are suspended, leading to a high-stress environment combining the worst aspects of both methodologies. Advice: Resist combining Waterfall-style big deadlines with Scrum. Manage stakeholder expectations upfront and break larger goals into smaller deliverables aligned with sprint cycles. Stick to Agile principles and avoid falling back into the big-bang, all-at-once delivery mode.

2024-09-15 HOW TO SUCCEED IN MRBEAST PRODUCTION (leaked PDF) { simonwillison.net }

The MrBeast definition of A, B and C-team players is one I haven’t heard before:

A-Players are obsessive, learn from mistakes, coachable, intelligent, don’t make excuses, believe in Youtube, see the value of this company, and are the best in the goddamn world at their job. B-Players are new people that need to be trained into A-Players, and C-Players are just average employees. […] They arn’t obsessive and learning. C-Players are poisonous and should be transitioned to a different company IMMEDIATELY. (It’s okay we give everyone severance, they’ll be fine).

I’m always interested in finding management advice from unexpected sources. For example, I love The Eleven Laws of Showrunning as a case study in managing and successfully delegating for a large, creative project.

Newsletters

2024-09-11 The web's clipboard { newsletter.programmingdigest.net }

2024-09-12 JavaScript Weekly Issue 704: September 12, 2024 { javascriptweekly.com }

· 15 min read

The Talk

2024-09-01 Investigating Legacy Design Trends in C++ & Their Modern Replacements - Katherine Rocha C++Now 2024 - YouTube { www.youtube.com }

Katherine Rocha

image-20240901150003068

GPT generated content (close to the talk content)

This digest is a breakdown of the talk, which covered a range of advanced C++ programming techniques and concepts. Each point from the talk is identified and described below, followed by C++ code examples that illustrate the concepts.


1. SFINAE and Overload Resolution

The talk begins with a discussion on the use of SFINAE (Substitution Failure Is Not An Error) and its role in overload resolution. SFINAE is a powerful C++ feature that allows template functions to be excluded from overload resolution based on specific conditions, enabling more precise control over which function templates should be used.

Key Points:

  • SFINAE is used to selectively disable template instantiation based on the properties of template arguments.
  • Overload resolution in C++ allows for multiple functions or operators with the same name to be defined, as long as their parameters differ. The compiler decides which function to call based on the arguments provided.

C++ Example:

#include <type_traits>
#include <iostream>

// Template function enabled only for arithmetic types using SFINAE
template <typename T>
typename std::enable_if<std::is_arithmetic<T>::value, T>::type
add(T a, T b) {
return a + b;
}

// Overload for non-arithmetic types is not instantiated
template <typename T>
typename std::enable_if<!std::is_arithmetic<T>::value, T>::type
add(T a, T b) = delete;

int main() {
std::cout << add(5, 3) << std::endl; // OK: int is arithmetic
// std::cout << add("Hello", "World"); // Error: const char* is not arithmetic, so the deleted overload is selected
return 0;
}

2. Compile-Time Error Messages

The talk transitions into how to improve compile-time error messages using static_assert and custom error handling in templates. By using these techniques, developers can provide clearer error messages when certain conditions are not met during template instantiation.

Key Points:

  • Use static_assert to enforce conditions at compile time, ensuring that the program fails to compile if certain criteria are not met.
  • Improve the readability of error messages by providing meaningful feedback directly in the code.

C++ Example:

#include <iostream>
#include <type_traits>

template<typename T>
void check_type() {
static_assert(std::is_integral<T>::value, "T must be an integral type");
}

int main() {
check_type<int>(); // OK
// check_type<double>(); // Compile-time error: T must be an integral type
return 0;
}

3. Concepts in C++20

The talk explores Concepts, a feature introduced in C++20, which allows developers to specify constraints on template arguments more succinctly and expressively compared to SFINAE. Concepts help in making templates more readable and the error messages more comprehensible.

Key Points:

  • Concepts define requirements for template parameters, making templates easier to understand and use.
  • Concepts improve the clarity of both template definitions and error messages.

C++ Example:

#include <concepts>
#include <iostream>

template<typename T>
concept Arithmetic = std::is_arithmetic_v<T>;

template<Arithmetic T>
T add(T a, T b) {
return a + b;
}

int main() {
std::cout << add(5, 3) << std::endl; // OK: int is arithmetic
// std::cout << add("Hello", "World"); // Error: concept 'Arithmetic' not satisfied
return 0;
}

4. Polymorphism and CRTP

The talk covers polymorphism and the Curiously Recurring Template Pattern (CRTP), a technique in which a class derives from a class template instantiated with that same class as the template argument. CRTP enables static polymorphism resolved at compile time, which can offer performance benefits over dynamic polymorphism.

Key Points:

  • Runtime Polymorphism: Achieved using inheritance and virtual functions, but comes with runtime overhead due to the use of vtables.
  • CRTP: A pattern that enables polymorphism at compile-time, avoiding the overhead of vtables.

C++ Example:

#include <iostream>

// CRTP Base class
template<typename Derived>
class Base {
public:
void interface() {
static_cast<Derived*>(this)->implementation();
}

static void staticInterface() {
Derived::staticImplementation();
}
};

class Derived1 : public Base<Derived1> {
public:
void implementation() {
std::cout << "Derived1 implementation" << std::endl;
}

static void staticImplementation() {
std::cout << "Derived1 static implementation" << std::endl;
}
};

class Derived2 : public Base<Derived2> {
public:
void implementation() {
std::cout << "Derived2 implementation" << std::endl;
}

static void staticImplementation() {
std::cout << "Derived2 static implementation" << std::endl;
}
};

int main() {
Derived1 d1;
d1.interface();
Derived1::staticInterface();

Derived2 d2;
d2.interface();
Derived2::staticInterface();

return 0;
}

5. Deducing this in C++23

The discussion moves to deducing this (explicit object parameters), a feature introduced in C++23 that allows a member function to declare the object it is called on as an explicit, deduced parameter, making member-function templates more expressive and less repetitive.

Key Points:

  • Deducing this lets a member function take the object it is called on as an explicit, deduced parameter, so its type and value category (const, non-const, lvalue, rvalue) are visible inside the function.
  • A single function template with an explicit object parameter can replace duplicated const/non-const and ref-qualified overloads, simplifying member-function templates.

C++ Example:

#include <iostream>
#include <type_traits>

class MyClass {
public:
// C++23 explicit object parameter ("deducing this"): 'self' deduces the type
// and value category of the object the function is called on, so one template
// replaces separate const and non-const overloads.
template <typename Self>
void print(this Self&& self) {
std::cout << (std::is_const_v<std::remove_reference_t<Self>> ? "const" : "non-const")
<< " MyClass instance" << std::endl;
}
};

int main() {
MyClass obj;
obj.print(); // Self deduces to MyClass& -> "non-const MyClass instance"

const MyClass cobj{};
cobj.print(); // Self deduces to const MyClass& -> "const MyClass instance"

return 0;
}

6. Design Methodologies: Procedural, OOP, Functional, and Data-Oriented Design

The final section of the talk compares various design methodologies including Procedural, Object-Oriented Programming (OOP), Functional Programming (FP), and Data-Oriented Design (DOD). Each paradigm has its strengths and use cases, and modern C++ often blends these methodologies to achieve optimal results.

Key Points:

  • Procedural Programming: Focuses on a sequence of steps or procedures to accomplish tasks.
  • Object-Oriented Programming (OOP): Organizes code around objects and data encapsulation.
  • Functional Programming (FP): Emphasizes immutability and function composition.
  • Data-Oriented Design (DOD): Focuses on data layout in memory for performance, often used in game development.

C++ Example (Object-Oriented):

#include <iostream>
#include <vector>

class Telemetry {
public:
virtual ~Telemetry() = default; // needed: events are deleted through Telemetry* below
virtual void process() const = 0;
};

class InstantaneousEvent : public Telemetry {
public:
void process() const override {
std::cout << "Processing instantaneous event" << std::endl;
}
};

class LongTermEvent : public Telemetry {
public:
void process() const override {
std::cout << "Processing long-term event" << std::endl;
}
};

void processEvents(const std::vector<Telemetry*>& events) {
for (const auto& event : events) {
event->process();
}
}

int main() {
std::vector<Telemetry*> events = { new InstantaneousEvent(), new LongTermEvent() };
processEvents(events);

for (auto event : events) {
delete event;
}

return 0;
}

C++ Example (Functional Programming):

#include <iostream>
#include <vector>
#include <algorithm>

struct Event {
int time;
bool isLongTerm;
};

void processEvents(const std::vector<Event>& events) {
std::for_each(events.begin(), events.end(), [](const Event& event) {
if (event.isLongTerm) {
std::cout << "Processing long-term event at time " << event.time << std::endl;
} else {
std::cout << "Processing instantaneous event at time " << event.time << std::endl;
}
});
}

int main() {
std::vector<Event> events = { {1, false}, {2, true}, {3, false} };
processEvents(events);
return 0;
}

C++ Example (Data-Oriented Design):

#include <iostream>
#include <vector>

struct TelemetryData {
std::vector<int> instantaneousTimes;
std::vector<int> longTermTimes;
};

void processInstantaneous(const std::vector<int>& times) {
for (int time : times) {
std::cout << "Processing instantaneous event at time " << time << std::endl;
}
}

void processLongTerm(const std::vector<int>& times) {
for (int time : times) {
std::cout << "Processing long-term event at time " << time << std::endl;
}
}

int main() {
TelemetryData data = {
{ 1, 3, 5 }, // instantaneousTimes
{ 2, 4, 6 }  // longTermTimes
};

processInstantaneous(data.instantaneousTimes);
processLongTerm(data.longTermTimes);

return 0;
}

GPT generated content (with a bit of "hallucinations")

Here's the expanded digest with essential text and detailed code examples for each point, focusing on modern replacements for legacy C++ practices.


Legacy Pointers vs. Smart Pointers

Legacy Practice: Use of raw pointers, manual memory management, and explicit new and delete. This can lead to memory leaks, dangling pointers, and undefined behavior.

Modern Replacement: Use smart pointers like std::unique_ptr, std::shared_ptr, and std::weak_ptr to manage dynamic memory automatically.

// Legacy code
class LegacyClass {
int* data;
public:
LegacyClass() { data = new int[10]; }
~LegacyClass() { delete[] data; }
};

// Modern code
#include <memory>

class ModernClass {
std::unique_ptr<int[]> data;
public:
ModernClass() : data(std::make_unique<int[]>(10)) {}
// Destructor not needed, as std::unique_ptr handles memory automatically
};

Key Insight: Using smart pointers reduces the need for manual memory management, preventing common errors like memory leaks and dangling pointers.


C-Style Arrays vs. STL Containers

Legacy Practice: Use of C-style arrays, which require manual memory management and do not provide bounds checking.

Modern Replacement: Use std::vector for dynamic arrays or std::array for fixed-size arrays. These containers handle memory management internally and offer bounds checking.

// Legacy code
int arr[10];
for (int i = 0; i < 10; ++i) {
arr[i] = i * 2;
}

// Modern code
#include <vector>
#include <array>

std::vector<int> vec(10);
for (int i = 0; i < 10; ++i) {
vec[i] = i * 2;
}

std::array<int, 10> arr2;
for (int i = 0; i < 10; ++i) {
arr2[i] = i * 2;
}

Key Insight: STL containers provide better safety and ease of use compared to traditional arrays, and should be the default choice in modern C++.


Manual Error Handling vs. Exceptions and std::expected

Legacy Practice: Return codes or error flags to indicate failures, which can be cumbersome and error-prone.

Modern Replacement: Use exceptions for error handling, which separate normal flow from error-handling code. Use std::expected (from C++23) for functions that can either return a value or an error.

// Legacy code
int divide(int a, int b, bool& success) {
if (b == 0) {
success = false;
return 0;
}
success = true;
return a / b;
}

// Modern code with exceptions
#include <stdexcept>

int divide(int a, int b) {
if (b == 0) throw std::runtime_error("Division by zero");
return a / b;
}

// Modern code with std::expected (C++23)
#include <expected>
#include <string>

std::expected<int, std::string> divide(int a, int b) {
if (b == 0) return std::unexpected("Division by zero");
return a / b;
}

Key Insight: Exceptions and std::expected offer more explicit and manageable error handling, improving code clarity and robustness.


Void Pointers vs. Type-Safe Programming

Legacy Practice: Use of void* for generic programming, leading to unsafe code and difficult debugging.

Modern Replacement: Use templates for type-safe generic programming, ensuring that code is checked at compile time.

// Legacy code
void process(void* data, int type) {
if (type == 1) {
int* intPtr = static_cast<int*>(data);
// Process int
} else if (type == 2) {
double* dblPtr = static_cast<double*>(data);
// Process double
}
}

// Modern code
template <typename T>
void process(T data) {
// Process data safely with type known at compile time
}

int main() {
process(10); // Automatically deduces int
process(5.5); // Automatically deduces double
}

Key Insight: Templates provide type safety, ensuring errors are caught at compile time and making code easier to maintain.


Inheritance vs. Composition and Type Erasure

Legacy Practice: Deep inheritance hierarchies, which can lead to rigid designs and hard-to-maintain code.

Modern Replacement: Favor composition over inheritance. Use type erasure (e.g., std::function, std::any) or std::variant to achieve polymorphism without inheritance.

// Legacy code
class Base {
public:
virtual void doSomething() = 0;
};

class Derived : public Base {
public:
void doSomething() override {
// Implementation
}
};

// Modern code using composition
#include <functional>

class Action {
std::function<void()> func;
public:
Action(std::function<void()> f) : func(f) {}
void execute() { func(); }
};

Action a([]() { /* Implementation */ });
a.execute();

// Modern code using std::variant
#include <variant>
#include <string>

using MyVariant = std::variant<int, double, std::string>;

void process(const MyVariant& v) {
std::visit([](auto&& arg) {
// Implementation for each type
}, v);
}

Key Insight: Composition and type erasure lead to more flexible and maintainable designs than traditional deep inheritance hierarchies.


Global Variables vs. Dependency Injection

Legacy Practice: Use of global variables for shared state, which can lead to hard-to-track bugs and dependencies.

Modern Replacement: Use dependency injection to provide dependencies explicitly, improving testability and modularity.

// Legacy code
int globalCounter = 0;

void increment() {
globalCounter++;
}

// Modern code using dependency injection
#include <iostream>
class Counter {
int count;
public:
Counter() : count(0) {}
void increment() { ++count; }
int getCount() const { return count; }
};

void useCounter(Counter& counter) {
counter.increment();
}

int main() {
Counter c;
useCounter(c);
std::cout << c.getCount();
}

Key Insight: Dependency injection enhances modularity and testability by explicitly providing dependencies rather than relying on global state.


Macros vs. constexpr and Inline Functions

Legacy Practice: Extensive use of macros for constants and inline code, which can lead to debugging challenges and obscure code.

Modern Replacement: Use constexpr for compile-time constants and inline functions for inline code, which are type-safe and easier to debug.

// Legacy code
#define SQUARE(x) ((x) * (x))

// Modern code using constexpr
constexpr int square(int x) {
return x * x;
}

// Legacy code using macro for constant
#define MAX_SIZE 100

// Modern code using constexpr
constexpr int maxSize = 100;

Key Insight: constexpr and inline functions offer better type safety and are easier to debug compared to macros, making the code more maintainable.


Manual Resource Management vs. RAII (Resource Acquisition Is Initialization)

Legacy Practice: Manual resource management, requiring explicit release of resources like files, sockets, and memory.

Modern Replacement: Use RAII, where resources are tied to object lifetime and automatically released when the object goes out of scope.

// Legacy code
FILE* file = fopen("data.txt", "r");
if (file) {
// Use file
fclose(file);
}

// Modern code using RAII with std::fstream
#include <fstream>

{
std::ifstream file("data.txt");
if (file.is_open()) {
// Use file
} // File is automatically closed when going out of scope
}

Key Insight: RAII automates resource management, reducing the risk of resource leaks and making code more reliable.


Explicit Loops vs. Algorithms and Ranges

Legacy Practice: Manual loops for operations like filtering, transforming, or accumulating data.

Modern Replacement: Use STL algorithms (std::transform, std::accumulate, std::copy_if) and ranges (C++20) to express intent more clearly and concisely.

// Legacy code
std::vector<int> vec = {1, 2, 3, 4, 5};
std::vector<int> result;

for (auto i : vec) {
if (i % 2 == 0) result.push_back(i * 2);
}

// Modern code using algorithms
#include <algorithm>
#include <iterator>
#include <vector>

std::vector<int> vec = {1, 2, 3, 4, 5};
std::vector<int> evens;
std::vector<int> result;

// Keep the even values, then double them (same result as the legacy loop)
std::copy_if(vec.begin(), vec.end(), std::back_inserter(evens),
[](int x) { return x % 2 == 0; });
std::transform(evens.begin(), evens.end(), std::back_inserter(result),
[](int x) { return x * 2; });

// Modern code using ranges (C++20)
#include <ranges>

auto result = vec | std::views::filter([](int x) { return x % 2 == 0; })
| std::views::transform([](int x) { return x * 2; });

Key Insight: STL algorithms and ranges make code more expressive and concise, reducing the likelihood of errors and enhancing readability.


Manual String Manipulation vs. std::string and std::string_view

Legacy Practice: Use of char* and manual string manipulation with functions like strcpy, strcat, and strcmp.

Modern Replacement: Use std::string for dynamic strings and std::string_view for non-owning string references, which offer safer and more convenient string handling.

// Legacy code
char str1[20] = "Hello, ";
char str2[] = "world!";
strcat(str1, str2);
if (strcmp(str1, "Hello, world!") == 0) {
// Do something
}

// Modern code using std::string
#include <string>

std::string str1 = "Hello, ";
std::string str2 = "world!";
str1 += str2;
if (str1 == "Hello, world!") {
// Do something
}

// Modern code using std::string_view (C++17)
#include <string_view>

std::string_view strView = str1;
if (strView == "Hello, world!") {
// Do something
}

Key Insight: std::string and std::string_view simplify string handling, provide better safety, and eliminate the risks associated with manual C-style string manipulation.


Threading with Raw Threads vs. std::thread and Concurrency Utilities

Legacy Practice: Creating and managing threads manually using platform-specific APIs, which can be error-prone and non-portable.

Modern Replacement: Use std::thread and higher-level concurrency utilities like std::future, std::async, and std::mutex to manage threading in a portable and safe way.

// Legacy code (Windows example)
#include <windows.h>

DWORD WINAPI threadFunc(LPVOID lpParam) {
// Thread code
return 0;
}

HANDLE hThread = CreateThread(NULL, 0, threadFunc, NULL, 0, NULL);

// Modern code using std::thread
#include <thread>

void threadFunc() {
// Thread code
}

std::thread t(threadFunc);
t.join(); // Wait for thread to finish

// Modern code using std::async
#include <future>

auto future = std::async(std::launch::async, threadFunc);
future.get(); // Wait for async task to finish

Key Insight: std::thread and other concurrency utilities provide a portable and higher-level interface for multithreading, reducing the complexity and potential errors associated with manual thread management.


Function Pointers vs. std::function and Lambdas

Legacy Practice: Use of function pointers to pass functions as arguments or store them in data structures, which can be cumbersome and less flexible.

Modern Replacement: Use std::function to store callable objects, and lambdas to create inline, anonymous functions.

// Legacy code
void (*funcPtr)(int) = someFunction;
funcPtr(10);

// Modern code using std::function and lambdas
#include <functional>
#include <iostream>

std::function<void(int)> func = [](int x) { std::cout << x << std::endl; };
func(10);

Key Insight: std::function and lambdas offer a more flexible and powerful way to handle functions as first-class objects, making code more modular and expressive.

· 15 min read

[[TOC]]

How the things work

2024-08-31 Hypervisor From Scratch - Part 1: Basic Concepts & Configure Testing Environment | Rayanfam Blog { rayanfam.com }

Hypervisor From Scratch

The source code for Hypervisor From Scratch is available on GitHub :

[https://github.com/SinaKarvandi/Hypervisor-From-Scratch/]

2024-08-31 Reversing Windows Internals (Part 1) - Digging Into Handles, Callbacks & ObjectTypes | Rayanfam Blog { rayanfam.com }

2024-08-31 A Tour of Mount in Linux | Rayanfam Blog { rayanfam.com }

image-20240830200258339

2024-09-01 tandasat/Hypervisor-101-in-Rust: { github.com }

The materials of "Hypervisor 101 in Rust", a one-day long course, to quickly learn hardware-assisted virtualization technology and its application for high-performance fuzzing on Intel/AMD processors.

https://tandasat.github.io/Hypervisor-101-in-Rust/

image-20240901010106576

SAML

2024-09-02 A gentle introduction to SAML | SSOReady { ssoready.com }

image-20240901234406239

2024-09-02 Visual explanation of SAML authentication { www.sheshbabu.com }

image-20240901233107815

🤔 Tricks!

2024-09-02 saving my git email from spam { halb.it }

Github has a cool option that replaces your private email with a noreply github email, which looks like this: 14497532+username@users.noreply.github.com. You just have to enable “keep my email address private” in the email settings. You can read the details in the github guide for setting your email privacy.

With this solution your email will remain private without losing precious green squares in the contribution graph.

CRDT

2024-09-01 Movable tree CRDTs and Loro's implementation – Loro { loro.dev }

This article introduces the difficulties and challenges of implementing Movable Tree CRDTs under collaborative editing, and how Loro implements them and sorts child nodes.

Art and Assets

2024-09-01 Public Work by Cosmos { public.work }

image-20240901005017480

Game Theory 101

2024-09-01 ⭐️ Game Theory 101 (#1): Introduction - YouTube { www.youtube.com }

image-20240901010905811

2024-09-01 Finding Nash Equilibria through Simulation { coe.psu.ac.th }

image-20240901011057303

(Emacs)

2024-09-01 A Simple Guide to Writing & Publishing Emacs Packages { spin.atomicobject.com }

image-20240901153404884

2024-09-01 Emacs starter kit { emacs-config-generator.fly.dev }

image-20240901153233791

2024-09-01 dot-files/emacs-blog.org at 1b54fe75d74670dc7bcbb6b01ea560c45528c628 · howardabrams/dot-files { github.com }

image-20240901152917238

2024-08-31 ⭐️ The Organized Life - An Expert‘s Guide to Emacs Org-Mode – TheLinuxCode { thelinuxcode.com }

2024-08-31 ⭐️ Mastering Organization with Emacs Org Mode: A Complete Guide for Beginners – TheLinuxCode { thelinuxcode.com }

image-20240830193810145

2024-08-30 chrisdone-archive/elisp-guide: A quick guide to Emacs Lisp programming { github.com }

image-20240830134758680

2024-08-30 Getting Started With Emacs Lisp Hands On - A Practical Beginners Tutorial – Ben Windsor – Strat at an investment bank { benwindsorcode.github.io }

image-20240830135224690

Retro / Fun

2024-08-30 VisiCalc - The Early History - Peter Jennings { benlo.com }

image-20240830135448117

2024-09-01 paperclips { www.decisionproblem.com }

image-20240901153052859

2024-09-02 Seiko Originals: The UC-2000, A Smartwatch from 1984 – namokiMODS { www.namokimods.com }

image-20240901235821210

Inspiration

2024-09-02 Navigating Corporate Giants Jeffrey Snover and the Making of PowerShell - CoRecursive Podcast { corecursive.com }

image-20240902001457920

I joined Microsoft at a time when the company was struggling to break into the enterprise market. While we dominated personal computing, our tools weren’t suitable for managing large data centers. I knew we needed a command-line interface (CLI) to compete with Unix, but Microsoft’s culture was deeply rooted in graphical user interfaces (GUIs). Despite widespread skepticism, I was determined to create a tool that could empower administrators to script and automate complex tasks.

My first major realization was that traditional Unix tools wouldn’t work on Windows because Unix is file-oriented, while Windows is API-oriented. This led me to focus on Windows Management Instrumentation (WMI) as the backbone for our CLI. Despite this, I faced resistance from within. The company only approved a handful of commands when we needed thousands. To solve this, I developed a metadata-driven architecture that allowed us to efficiently create and scale commands, laying the foundation for PowerShell.

However, getting others on board was a challenge. When I encountered a team planning to port a Unix shell to Windows, I knew they were missing the bigger picture. To demonstrate my vision, I locked myself away and wrote a 10,000-line prototype of what would become PowerShell. This convinced the team to embrace my approach.

I was able to show them and they said, ‘Well, what about this?’ And I showed them. And they said, ‘What about that?’ And I showed them. Their eyes just got big and they’re like, ‘This, this, this.’

Pursuing this project meant taking a demotion, a decision that was financially and personally difficult. But I was convinced that PowerShell could change the world, and that belief kept me going. To align the team, I wrote the Monad Manifesto, which became the guiding document for the project. Slowly, I convinced product teams like Active Directory to support us, which helped build momentum.

The project faced another major challenge during Microsoft’s push to integrate everything with .NET. PowerShell, built on .NET, was temporarily removed from Windows due to broader integration issues. It took years of persistence to get it back in, but I eventually succeeded.

PowerShell shipped with Windows Vista, but I continued refining it through multiple versions, despite warnings that focusing on this project could harm my career. Over time, PowerShell became a critical tool for managing data centers and was instrumental in enabling Microsoft’s move to the cloud.

In the end, the key decisions—pushing for a CLI, accepting a demotion, and persisting through internal resistance—led to PowerShell's success and allowed me to make a lasting impact on how Windows is managed.

2024-09-02 Netflix/maestro: Maestro: Netflix’s Workflow Orchestrator { github.com }

image-20240901234630103

2024-09-01 The Scale of Life { www.thescaleoflife.com }

image-20240901153703324

2024-09-01 opslane/opslane: Making on-call suck less for engineers { github.com }

image-20240901152737861

2024-09-01 Azure Quantum | Learn with quantum katas { quantum.microsoft.com }

image-20240901152236367

2024-09-01 microsoft/QuantumKatas: Tutorials and programming exercises for learning Q# and quantum computing { github.com }

2024-09-01 EP122: API Gateway 101 - ByteByteGo Newsletter { blog.bytebytego.com }

2024-09-01 pladams9/hexsheets: A basic spreadsheet application with hexagonal cells inspired by: http://www.secretgeek.net/hexcel. { github.com }

image-20240901010426062

2024-09-01 Do Quests, Not Goals { www.raptitude.com }

The other problem with goals is that, outside of sports, “goal” has become an uninspiring, institutional word. Goals are things your teachers and managers have for you. Goals are made of quotas and Key Performance Indicators. As soon as I write the word “goals” on a sheet of paper I get drowsy.

image-20240901005313993

Here are some of the quests people took on:

  • Declutter the whole house
  • Record an EP
  • Prep six months’ worth of lessons for my students
  • Set up an artist’s workspace
  • Finish two short stories
  • Gain a basic knowledge of classical music
  • Fill every page in a sketchbook with drawings
  • Complete a classical guitar program
  • Make an “If I get hit by a bus” folder for my family

2024-08-30 oTranscribe { otranscribe.com }

image-20240830135922316

Security

2024-08-31 The State of Application Security 2023 • Sebastian Brandes • GOTO 2023 - YouTube { www.youtube.com }

image-20240830192609064

Sebastian, co-founder of Hey Hack, a Danish startup focused on web application security, presented findings from a large-scale study involving the scanning of nearly 4 million hosts globally. The study uncovered widespread vulnerabilities in web applications, including file leaks, dangling DNS records, vulnerable FTP servers, and persistent cross-site scripting (XSS) issues.

Key findings include:

  • File leaks: 29% of organizations had exposed sensitive data like source code, passwords, and private keys.
  • Dangling DNS records: Risks of subdomain takeover attacks due to outdated DNS entries.
  • Vulnerable FTP servers: 7.9% of servers running ProFTPD 1.3.5 were at risk due to a file copy module vulnerability.
  • XSS vulnerabilities: 4% of companies had known XSS issues, posing significant security risks.

Sebastian stressed that web application firewalls (WAFs) are not foolproof and cannot replace fixing underlying vulnerabilities. He concluded by emphasizing the importance of early investment in application security during the development process to prevent future attacks.

"We’ve seen lots of leaks or file leaks that are sitting out there—files that you probably would not want to expose to the public internet."

"Web application firewalls can maybe do something, but they’re not going to save you. It’s much, much better to go ahead and fix the actual issues in your application."

2024-08-30 BeEF - The Browser Exploitation Framework Project { beefproject.com }

image-20240830140152625

2024-08-31 stack-auth/stack: Open-source Clerk/Auth0 alternative { github.com }

Stack Auth is a managed user authentication solution. It is developer-friendly and fully open-source (licensed under MIT and AGPL).

Stack gets you started in just five minutes, after which you'll be ready to use all of its features as you grow your project. Our managed service is completely optional and you can export your user data and self-host, for free, at any time.

image-20240830194951803

Markdown

2024-09-02 romansky/dom-to-semantic-markdown: DOM to Semantic-Markdown for use in LLMs { github.com }

image-20240901232517227

C || C++

2024-09-02 Faster Integer Parsing { kholdstare.github.io }

image-20240901233314132

2024-09-01 c++ - What is the curiously recurring template pattern (CRTP)? - Stack Overflow { stackoverflow.com }

image-20240901144719965

image-20240901144828823

The Era of AI

2024-09-02 txtai { neuml.github.io }

txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.

image-20240901235351463

2024-09-02 Solving the out-of-context chunk problem for RAG { d-star.ai }

Many of the problems developers face with RAG come down to this: Individual chunks don’t contain sufficient context to be properly used by the retrieval system or the LLM. This leads to the inability to answer seemingly simple questions and, more worryingly, hallucinations.

Examples of this problem

  • Chunks oftentimes refer to their subject via implicit references and pronouns. This causes them to not be retrieved when they should be, or to not be properly understood by the LLM.
  • Individual chunks oftentimes don’t contain the complete answer to a question. The answer may be scattered across a few adjacent chunks.
  • Adjacent chunks presented to the LLM out of order cause confusion and can lead to hallucinations.
  • Naive chunking can lead to text being split "mid-thought", leaving neither chunk with useful context (see the small sketch after this list).
  • Individual chunks oftentimes only make sense in the context of the entire section or document, and can be misleading when read on their own.
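To make the "split mid-thought" failure concrete, here is a tiny sketch (mine, not from the article) of naive fixed-size chunking; the example text, chunk size, and function name are all illustrative:

#include <iostream>
#include <string>
#include <vector>

// Naive chunking: cut the text every `size` characters, ignoring sentence boundaries.
std::vector<std::string> naiveChunks(const std::string& text, std::size_t size) {
    std::vector<std::string> chunks;
    for (std::size_t i = 0; i < text.size(); i += size) {
        chunks.push_back(text.substr(i, size));
    }
    return chunks;
}

int main() {
    std::string doc =
        "The Model X reactor uses a closed coolant loop. "
        "It must be inspected every 30 days to stay certified.";

    int n = 0;
    for (const auto& chunk : naiveChunks(doc, 55)) {
        std::cout << "chunk " << n++ << ": [" << chunk << "]\n";
    }
    return 0;
}

With a 55-character window the first chunk ends with a dangling "It must" and the second chunk never says what has to be inspected; neither chunk is self-contained, which is exactly what trips up both retrieval and the LLM.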

2024-08-30 MahmoudAshraf97/whisper-diarization: Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper { github.com }

2024-08-30 openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision { github.com }

2024-08-30 ggerganov/whisper.cpp: Port of OpenAI's Whisper model in C/C++ { github.com }

2024-09-01 microsoft/semantic-kernel: Integrate cutting-edge LLM technology quickly and easily into your apps { github.com }

2024-09-01 How to add genuinely useful AI to your webapp (not just chatbots) - Steve Sanderson - YouTube { www.youtube.com }

image-20240901012420483

The talk presented here dives into the integration of AI within applications, particularly focusing on how developers, especially those familiar with .NET and web technologies, can leverage AI to enhance user experiences. Here are the key takeaways and approaches from the session:

Making Applications Intelligent: The speaker discusses various interpretations of making an app "intelligent." It’s not just about adding a chatbot. While chatbots can create impressive demos quickly, they may not necessarily be useful in production. For AI to be genuinely beneficial, it must save time, improve job performance, and be accurate. The speaker challenges developers to quantify these benefits rather than rely on assumptions.

"If you try to put it into production, are people going to actually use it? Well, maybe it depends... does this thing actually save people time and enable them to do their job better than they would have otherwise?"

Patterns of AI Integration: The speaker introduces several UI-level AI enhancements such as Smart Components. These are experiments allowing developers to add AI to the UI layer without needing to rebuild the entire app. An example given is a Smart Paste feature that allows users to paste large chunks of text, which AI then parses and fills out the corresponding fields in a form. This feature improves user efficiency by reducing the need for repetitive and mundane tasks.

Another example is the Smart ComboBox, which uses semantic search to match user input with relevant categories, even when the exact terms do not appear in the list. This feature is particularly useful in scenarios where users may not know the exact terminology.
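
A rough sketch of the idea behind such a semantic combo box: embed the user's input, then rank candidate categories by cosine similarity against precomputed category embeddings. The embed function here is an assumed placeholder (e.g. whatever embedding model the app uses), not the component's actual API:

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function rankCategories(
  userInput: string,
  categories: { name: string; vector: number[] }[],
  embed: (text: string) => Promise<number[]>,
): Promise<string[]> {
  const query = await embed(userInput);
  return categories
    .map(c => ({ name: c.name, score: cosineSimilarity(query, c.vector) }))
    .sort((a, b) => b.score - a.score)   // best semantic match first
    .map(c => c.name);
}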

Deeper AI Integration: Moving beyond UI enhancements, the speaker explores deeper layers of AI integration within traditional web applications like e-commerce platforms. For instance, AI can be used to:

  • Semantic Search: Improve search functionality so that users don't need to know the exact phrasing.
  • Summarization: Automatically generate descriptive titles for support tickets to help staff quickly identify issues.
  • Classification: Automatically categorize support tickets to streamline workflows and save staff time.
  • Sentiment Analysis: Provide sentiment scores to help staff prioritize urgent issues.

"I think even in this very traditional web application, there's clearly lots of opportunity for AI to add a lot of genuine value that will help your staff actually be more productive."

Data and AI Integration: The talk also delves into the importance of data in AI applications. The speaker introduces the Semantic Kernel, a .NET library for working with AI, and demonstrates how to generate data with LLMs (Large Language Models) running locally on the development machine via Ollama. The process involves creating categories, products, and related data (like product manuals) in a structured manner.

Data Ingestion and Semantic Search: The speaker showcases how to ingest unstructured data, such as PDFs, and convert them into a format that AI can use for semantic search. Using the PDFPig library, the speaker demonstrates extracting text from PDFs, chunking it into smaller, meaningful fragments, and then embedding these chunks into a semantic space. This allows for efficient, relevant searches within the data, enhancing the AI’s ability to provide accurate information quickly.
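
A minimal sketch of the chunking step, assuming the raw text has already been extracted from the PDF; the size and overlap values are illustrative, not the talk's:

// Split extracted text into overlapping, roughly fixed-size fragments
// before embedding them for semantic search.
function chunkText(text: string, maxChars = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    const end = Math.min(start + maxChars, text.length);
    chunks.push(text.slice(start, end));
    if (end === text.length) break;
    start = end - overlap; // overlap so a thought split at a boundary survives
  }
  return chunks;
}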

Implementing Inference with AI: As the talk progresses, the speaker moves on to implementing AI-based inference within a Blazor application. By integrating summarization directly into the workflow, the application can automatically generate summaries of customer interactions, helping support staff to quickly understand the context of a ticket without reading through the entire conversation history.

"I want to generate an updated summary for it... Generate a summary of the entire conversation log at that point."

Function Calling and RAG (Retrieval-Augmented Generation): The speaker discusses a more complex AI pattern—RAG—which involves the AI model retrieving specific data to answer queries. While standard RAG implementations rely on specific AI platforms, the speaker demonstrates a custom approach that works across various models, including models run locally with Ollama. This approach involves checking if the AI has enough context to answer a question and then retrieving relevant information if needed.
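
A hedged sketch of that "retrieve only if needed" flow; askModel and searchChunks are hypothetical placeholders (the talk itself uses Semantic Kernel and Ollama, whose APIs are not reproduced here):

async function answerWithOptionalRetrieval(
  question: string,
  askModel: (prompt: string) => Promise<string>,
  searchChunks: (query: string) => Promise<string[]>,
): Promise<string> {
  // 1. Ask the model whether it can answer from its existing context alone.
  const check = await askModel(
    `Can you answer the following question without extra documents? ` +
    `Reply YES or NO only.\n\nQuestion: ${question}`,
  );

  if (check.trim().toUpperCase().startsWith("YES")) {
    return askModel(question);
  }

  // 2. Otherwise retrieve relevant chunks and ground the answer in them.
  const context = (await searchChunks(question)).join("\n---\n");
  return askModel(
    `Answer using only the context below. If the context is insufficient, say so.\n\n` +
    `Context:\n${context}\n\nQuestion: ${question}`,
  );
}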

Job interview / Algorithms

2024-09-01 Understanding B-Trees: The Data Structure Behind Modern Databases - YouTube { www.youtube.com }

image-20240901011314149

Edit Distance

2024-09-02 Needleman–Wunsch algorithm - Wikipedia { en.wikipedia.org }

2024-09-02 Levenshtein distance - Wikipedia { en.wikipedia.org }

function LevenshteinDistance(char s[1..m], char t[1..n]):
    // for all i and j, d[i,j] will hold the Levenshtein distance between
    // the first i characters of s and the first j characters of t
    declare int d[0..m, 0..n]

    set each element in d to zero

    // source prefixes can be transformed into empty string by
    // dropping all characters
    for i from 1 to m:
        d[i, 0] := i

    // target prefixes can be reached from empty source prefix
    // by inserting every character
    for j from 1 to n:
        d[0, j] := j

    for j from 1 to n:
        for i from 1 to m:
            if s[i] = t[j]:
                substitutionCost := 0
            else:
                substitutionCost := 1

            d[i, j] := minimum(d[i-1, j] + 1,                   // deletion
                               d[i, j-1] + 1,                   // insertion
                               d[i-1, j-1] + substitutionCost)  // substitution

    return d[m, n]
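
For reference, a direct TypeScript translation of the pseudocode above (full matrix, no row-compression optimization):

function levenshteinDistance(s: string, t: string): number {
  const m = s.length, n = t.length;
  // d[i][j] = distance between the first i chars of s and the first j chars of t
  const d: number[][] = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));

  for (let i = 1; i <= m; i++) d[i][0] = i; // delete all source characters
  for (let j = 1; j <= n; j++) d[0][j] = j; // insert all target characters

  for (let j = 1; j <= n; j++) {
    for (let i = 1; i <= m; i++) {
      const substitutionCost = s[i - 1] === t[j - 1] ? 0 : 1;
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                    // deletion
        d[i][j - 1] + 1,                    // insertion
        d[i - 1][j - 1] + substitutionCost, // substitution
      );
    }
  }
  return d[m][n];
}

// levenshteinDistance("kitten", "sitting") === 3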

· 13 min read

Newsletters

2024-08-26 JavaScript Weekly Issue 701: August 22, 2024 { javascriptweekly.com }

Good Reads

2024-08-26 ⭐️ On Writing Well | nikhil.bafna { zodvik.com }

image-20240825174540032

Tech Talks

2024-08-30 Messaging: The fine line between awesome and awful - Laila Bougria - NDC Oslo 2024 - YouTube { www.youtube.com }

image-20240829180120559

Here's a digest of the talk:

I started with a light-hearted introduction about my cultural background and how it relates to having a siesta after lunch, which isn’t an option today since I'm giving this talk. About a decade ago, I was working on a project where we were building a retail system from scratch for a client. Initially, we created a monolithic architecture, which worked well for a while. However, as the business grew, we faced challenges. We saw increased demand and the architecture started showing its limitations. We experienced issues like failed requests, high strain on the database, and even system crashes.

Given the new demands, we decided to evolve our architecture by moving to a message-based system. We hoped this would solve our problems by improving performance, increasing resilience, and allowing easier scaling. However, we quickly realized that the transition wasn’t as smooth as expected. Instead of getting faster, the system became slower, and we started experiencing issues with UI inconsistency. Customers reported cases where the system didn't reflect their actions, leading to confusion and a poor user experience. We also encountered duplicate messages and messages arriving out of order, which led to significant failures and side effects in the system.

One critical lesson we learned was the importance of understanding the shift from synchronous to asynchronous communication. In a synchronous system, there's a direct, immediate response. But in an asynchronous system, messages might take a while to process, leading to delays and out-of-order execution. This can cause unexpected behaviors in the system, making troubleshooting a lot more challenging.

To address the issues with communication patterns, we explored different messaging patterns like one-way communication, request-response, and publish-subscribe. Each has its use case, but we learned that choosing the right pattern is crucial for system stability. For instance, publish-subscribe can be overused, leading to what I call the "passive-aggressive publisher" problem, where a service publishes an event expecting others to act on it, but without direct control, this can cause problems.

A key takeaway is that decoupling doesn’t happen automatically in a message-based system. It requires deliberate effort to identify service boundaries and manage coupling properly. When splitting a monolith, it’s crucial to ask the right questions about the domain and not just accept the default ordering of processes. For example, questioning whether the order in which tasks are executed is necessary can help in finding opportunities for parallel execution, thereby improving efficiency.

We also found that managing SLA (Service Level Agreements) became essential in an asynchronous environment. We started using delayed messages to ensure that tasks were completed within an acceptable time frame. This helped us recover gracefully from both technical and business failures, like handling payment processing delays or credit card issues.

In the end, it’s not just about transitioning to a new architecture but about understanding the trade-offs and challenges that come with it. The key is to balance the benefits of decoupling with the need to maintain order and consistency in the system. By carefully choosing the right communication patterns and managing the inevitable coupling, we can build systems that are both scalable and resilient, even in the face of growing demand.

This journey taught us that evolving a system architecture isn’t just about adopting new technologies but also about adapting our approach to fit the new reality. And sometimes, the lessons learned the hard way are the most valuable ones.

“One of the things we also observed is that sometimes we would receive duplicate messages, and the thing is, we didn’t really account for that. So that’s when we started to see failures and even side effects sometimes.”

“If you need a response with any data to continue when you publish an event—no. Then again, passive-aggressive communication and finally if you need any control over who receives or subscribes to that event—also not a good fit.”

The talk emphasizes the importance of thoughtful architecture decisions, especially when transitioning to a message-based system, and the need for continuous collaboration with business stakeholders to align the system’s behavior with business requirements.

Not financial advice

2024-08-30 Ditch Banks — Go With Money Market Funds and Treasuries { thefinancebuff.com }

2024-08-30 Ditch banks – Go with money market funds and treasuries | Hacker News { news.ycombinator.com }

image-20240829181735734

Inspiration

2024-08-30 YTCH { ytch.xyz }

https://news.ycombinator.com/item?id=41247023 If YouTube had actual channels

image-20240829190610309

2024-08-30 GlyphDrawing.Club -blog { blog.glyphdrawing.club }

image-20240829185659044

2024-08-30 Vanilla JSX { vanillajsx.com }

2024-08-30 VanillaJSX.com | Hacker News { news.ycombinator.com }

image-20240829181223208

2024-08-30 Blender Shortcuts { hollisbrown.github.io }

image-20240829181030345

🏴‍☠️ Borrow it!

2024-08-30 clemlesne/scrape-it-now: A website to scrape? There's a simple way. { github.com }

image-20240829180644387

⭐️ Simplify HTML / Reader view

2024-08-30 aaronsw/html2text: Convert HTML to Markdown-formatted text. { github.com }

2024-08-30 Tracking supermarket prices with playwright { www.sakisv.net }

image-20240829190924245

The Era of AI

2024-08-26 chartdb/chartdb: Free and Open-source database diagrams editor, visualize and design your DB with a single query. { github.com }

Open-source database diagrams editor. No installations • No database password required.

image-20240825174357928

2024-08-30 Deep Live Cam: Real-Time Face Swapping and One-Click Video Deepfake Tool { deeplive.cam }

image-20240829192202923

WebDev

Charts

2024-08-26 Let’s Make A Bar Chart Tutorial | Vega { vega.github.io }

image-20240825174836815

CSS

2024-08-30 CSS Grid Areas { ishadeed.com }

image-20240829181923758

Keyboard / Game Pad

2024-08-26 jamiebuilds/tinykeys: A tiny (~650 B) & modern library for keybindings. { github.com }

A tiny (~650 B) & modern library for keybindings. See Demo

import { tinykeys } from "tinykeys" // Or `window.tinykeys` using the CDN version

tinykeys(window, {
  "Shift+D": () => {
    alert("The 'Shift' and 'd' keys were pressed at the same time")
  },
  "y e e t": () => {
    alert("The keys 'y', 'e', 'e', and 't' were pressed in order")
  },
  "$mod+([0-9])": event => {
    event.preventDefault()
    alert(`Either 'Control+${event.key}' or 'Meta+${event.key}' were pressed`)
  },
})

2024-08-30 alvaromontoro/gamecontroller.js: A JavaScript library that lets you handle, configure, and use gamepads and controllers on a browser, using the Gamepad API { github.com }

Styles

2024-08-26 Newspaper Style Design { codepen.io }

image-20240825175903485

JavaScript / DOM

2024-08-30 Patterns for Memory Efficient DOM Manipulation with Modern Vanilla JavaScript – Frontend Masters Boost { frontendmasters.com }

image-20240829184512754

This article focuses on optimizing DOM manipulation using modern vanilla JavaScript to enhance performance and reduce memory usage in web applications. Understanding and applying these low-level techniques can be crucial in scenarios where performance is a priority, such as in large projects like Visual Studio Code, which relies heavily on manual DOM manipulation for efficiency.

The article begins with an overview of the Document Object Model (DOM), explaining that it is a tree-like structure where each HTML element represents a node. The common DOM APIs like querySelector(), createElement(), and appendChild() are introduced, emphasizing that while frameworks like React or Angular abstract these details, knowing how to manipulate the DOM directly can lead to performance gains.

A significant point is the trade-off between using frameworks and manual DOM manipulation. While frameworks simplify development, they can also introduce performance overhead through unnecessary re-renders and excessive memory usage. The article argues that in performance-critical applications, direct DOM manipulation can prevent these issues by reducing the garbage collector's workload.

To optimize DOM manipulation, several tips are provided:

  • Hiding or showing elements is preferred over creating and destroying them dynamically. This approach keeps the DOM more static, leading to fewer garbage collection calls and reduced client-side logic complexity.
  • For example, instead of dynamically creating an element with JavaScript, it’s more efficient to toggle its visibility with classes (el.classList.add('show') or el.style.display = 'block').

Other techniques discussed include:

  • Using textContent instead of innerText for reading content from elements, as it is faster and avoids forcing a reflow.
  • insertAdjacentHTML is preferred over innerHTML because it inserts content without destroying existing DOM elements first.
  • For the fastest performance, the <template> tag combined with appendChild or insertAdjacentElement is recommended for creating and inserting new DOM elements efficiently (a minimal sketch follows this list).
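
A minimal sketch of that <template> approach, assuming the page defines a <template id="row-template"><li class="row"></li></template> and a <ul id="list"> (both hypothetical):

const template = document.getElementById("row-template") as HTMLTemplateElement;
const list = document.getElementById("list")!;

function addRow(text: string): void {
  // Clone the template content instead of building nodes or HTML strings.
  const fragment = template.content.cloneNode(true) as DocumentFragment;
  fragment.querySelector(".row")!.textContent = text;
  list.appendChild(fragment);
}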

The article also covers advanced techniques for managing memory:

  • WeakMap and WeakRef are used to avoid memory leaks by ensuring that references to DOM nodes are properly garbage collected when the nodes are removed from the DOM.
  • Proper cleanup of event listeners is emphasized, including methods like removeEventListener, using the once parameter, and employing event delegation to minimize the number of event listeners in dynamic components.

For handling multiple event listeners, the AbortController is introduced as a method to unbind groups of events easily. This can be particularly useful when needing to clean up or cancel multiple event listeners at once.
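
A small sketch of that pattern; the handler names are placeholders:

// Bind a group of listeners to one controller, then remove them all at once.
const controller = new AbortController();
const { signal } = controller;

window.addEventListener("resize", onResize, { signal });
window.addEventListener("scroll", onScroll, { signal });
document.addEventListener("keydown", onKeydown, { signal });

// Later, e.g. when the component is torn down:
controller.abort(); // all three listeners are removed with one call

function onResize() { /* ... */ }
function onScroll() { /* ... */ }
function onKeydown() { /* ... */ }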

The article wraps up with profiling and debugging advice. It recommends using Chrome DevTools for memory profiling and JavaScript execution time analysis to ensure that DOM operations do not lead to performance bottlenecks or memory leaks.

"Efficient DOM manipulation isn’t just about using the right methods—it’s also about understanding when and how often you’re interacting with the DOM."

The key takeaway is that while frameworks provide convenience, understanding and utilizing these low-level DOM manipulation techniques can significantly enhance the performance of web applications, particularly in performance-sensitive scenarios.

TypeScript

2024-08-26 gruhn/typescript-sudoku: Playing Sudoku in TypeScript while the type checker highlights mistakes. { github.com }

image-20240825175100097

Markdown

2024-08-26 Getting Started | Milkdown { milkdown.dev }

image-20240825180829277

  • 📝 WYSIWYG Markdown - Write markdown in an elegant way
  • 🎨 Themable - Create your own theme and publish it as an npm package
  • 🎮 Hackable - Create your own plugin to support your awesome idea
  • 🦾 Reliable - Built on top of prosemirror and remark
  • Slash & Tooltip - Write faster than ever, enabled by a plugin.
  • 🧮 Math - LaTeX math equations support via math plugin
  • 📊 Table - Table support with fluent ui, via table plugin
  • 🍻 Collaborate - Shared editing support with yjs
  • 💾 Clipboard - Support copy and paste markdown, via clipboard plugin
  • 👍 Emoji - Support emoji shortcut and picker, via emoji plugin

SteamDeck

2024-08-30 mikeroyal/Steam-Deck-Guide: Steam Deck Guide. Learn all about the Tools, Accessories, Games, Emulators, and Gaming Tips that will make your Steam Deck an awesome Gaming Handheld or a Portable Computer Workstation. { github.com }

image-20240829191917324

Job interview Prep

2024-08-30 Visual Data Structures Cheat-Sheet - by Nick M { photonlines.substack.com }

image-20240829185934397

image-20240829190010682

image-20240829190043550

image-20240829190132236

Workplace

2024-08-30 The Science of Well-Being | Coursera { www.coursera.org }

image-20240829191217188

The Science of Well-Being course by Yale University challenges common assumptions about happiness and teaches evidence-based strategies for improving well-being.

It explains that external factors like wealth have less impact on long-term happiness than we often believe.

Hedonic adaptation shows that people quickly return to a baseline level of happiness after changes in their lives, highlighting the need for sustainable sources of well-being.

Practices like gratitude, mindfulness, and meditation are introduced to help shift focus and improve emotional regulation.

The course emphasizes the importance of social connections and forming healthy habits as key components of happiness.

2024-08-30 Your life, your volume | Loop Earplugs { www.loopearplugs.com }

Unfortunately, not sponsored content. Seriously, my colleague Lisi recommended these.

image-20240829182035762

Burnout

Burnout can manifest in different ways depending on the underlying causes. Here’s an expanded explanation of the two types of burnout mentioned:

1. Burnout from Boredom and Routine:

This type of burnout occurs when tasks become monotonous, and there’s a lack of challenge or variety in the work. Over time, this can lead to a sense of disengagement and apathy.

Tips to Mitigate This Type of Burnout:

  • Introduce Variety: Rotate tasks, take on new projects, or explore different aspects of your role to break the monotony.
  • Set Personal Goals: Establishing new challenges or learning opportunities can reinvigorate your sense of purpose.
  • Take Breaks: Step away from work periodically to reset your mind and come back with fresh energy.
  • Seek Feedback: Regularly ask for feedback to ensure you’re growing and improving in your role, which can make work more engaging.
  • Incorporate Creativity: Find ways to add a creative touch to your work, even in routine tasks, to make them more interesting.

2. Burnout from Too Many Changes and Uncertainty:

This type of burnout arises when there’s a constant state of flux, leading to stress and anxiety due to the unpredictability of work.

Tips to Mitigate This Type of Burnout:

  • Prioritize and Organize: Break down tasks into manageable steps and prioritize them to regain a sense of control.
  • Embrace Flexibility: Accept that change is inevitable and try to adapt by being flexible and open to new approaches.
  • Develop Coping Strategies: Practice stress-relief techniques like mindfulness, deep breathing, or exercise to manage anxiety.
  • Seek Support: Talk to colleagues, supervisors, or a professional about your concerns to gain perspective and support.
  • Focus on What You Can Control: Concentrate on aspects of your work where you can make an impact, rather than worrying about uncertainties beyond your control.

General Tips to Combat Burnout:

  • Maintain Work-Life Balance: Ensure you’re taking time for yourself outside of work to recharge.
  • Regular Exercise and Healthy Eating: Physical well-being can greatly influence mental health and resilience.
  • Limit Overtime: Avoid consistently working long hours, which can lead to exhaustion.
  • Take Vacations: Time away from work is crucial for long-term productivity and well-being.
  • Seek Professional Help: If burnout becomes overwhelming, don’t hesitate to consult with a mental health professional.

Personal Blogs

2024-08-26 Articles { codinghelmet.com }

Zoran Horvat

image-20240825181357093

· 18 min read

📚️ Good Reads

2024-06-16 ✏️ How to Build Anything Extremely Quickly - Learn How To Learn

Found in Programming Digest.

Outline speedrunning algorithm:

  1. Make an outline of the project

  2. For each item in the outline, make an outline. Do this recursively until the items are small

  3. Fill in each item as fast as possible

    • You’ll get more momentum by speedrunning it, which feels great, and will make you even more productive

    • DO NOT PERFECT AS YOU GO. This is a huge and common mistake.

    • Finally, once completely done, go back and perfect

    • Color the title text, figure out if buttons should have 5% or 6% border radius, etc

    • Since you’re done, you’ll be less stressed, have a much clearer mind, and design your project better

    • And hey, you’ll enjoy the whole process more, and end up making more things over the long run, causing you to learn/grow more

image-20240806224912272

2024-06-18 A Long Guide to Giving a Short Academic Talk - Benjamin Noble

Anatomy of a Short Talk

Short academic talks tend to follow a standard format:

  • Motivation of the general idea. This can take the form of an illustrative example from the real world or it can highlight a puzzle or gap in the existing scholarship.
  • Ask the research question and preview your answer.
  • A few brief references to the literature you’re speaking to.
  • Your theoretical innovation.
  • An overview of the data underlying the result.
  • Descriptive statistics (if relevant).
  • (Maybe the statistical approach or model, but only if it’s something impressive and/or non-standard. The less Greek the better.)
  • Statistical results IN FIGURE FORM! No regression tables please.
  • Conclusion that restates your main finding. Then, briefly reference your other results (which you have in your appendix slides and would be happy to discuss further in Q&A), and highlight the broader implications of your research.

image-20240806225128203

2024-06-26 What's hidden behind "just implementation details" | nicole@web

Found in Programming Digest: Always Measure One Level Deeper

image-20240806225325353

2024-06-29 A Bunch of Programming Advice I’d Give To Myself 15 Years Ago - Marcus' Blog

If you (or your team) are shooting yourselves in the foot constantly, fix the gun

Regularly identify and fix recurring issues in your workflow or codebase to simplify processes and reduce errors. Don't wait for an onboarding or major overhaul to address these problems.

Assess the trade-off you’re making between quality and pace, make sure it’s appropriate for your context

Evaluate the balance between speed and correctness based on the project's impact and environment. In non-critical applications, prioritize faster shipping and quicker fixes over exhaustive testing.

Spending time sharpening the axe is almost always worth it

Invest time in becoming proficient with your tools and environment. Learn shortcuts, become a fast typist, and know your editor and OS well. This efficiency pays off in the long run.

If you can’t easily explain why something is difficult, then it’s incidental complexity, which is probably worth addressing

Simplify or refactor complex code that can't be easily explained. This reduces future maintenance and makes your system more robust.

Try to solve bugs one layer deeper

Address the root cause of bugs rather than applying superficial fixes. This approach results in a cleaner, more maintainable system.

Don’t underestimate the value of digging into history to investigate some bugs

Use version control history to trace the origin of bugs. Tools like git bisect can be invaluable for pinpointing changes that introduced issues.

Bad code gives you feedback, perfect code doesn’t. Err on the side of writing bad code

Write code quickly to get feedback, even if it’s not perfect. This helps you learn where to focus your efforts and improves overall productivity.

Make debugging easier

Implement debugging aids such as user data replication, detailed tracing, and state debugging. These tools streamline the debugging process and reduce time spent on issues.

When working on a team, you should usually ask the question

Don’t hesitate to ask more experienced colleagues for help. It’s often more efficient than struggling alone and fosters a collaborative environment.

Shipping cadence matters a lot. Think hard about what will get you shipping quickly and often

Optimize your workflow to ensure frequent and fast releases. Simplify processes, use reusable patterns, and maintain a system free of excessive bugs to improve shipping speed.

2024-06-30 How Does Facebook Manage to Serve Billions of Users Daily?

Found in 2024-06-30 Programming Digest: The Itanic Saga

You might be wondering, “Well, can’t we just query the database to get the posts that should be shown in the feed of a user?”. Of course, we can – but it won’t be fast enough. The database is more like a warehouse, where the data is stored in a structured way. It’s optimized for storing and retrieving data, but not for serving data fast.

The cache is more like a shelf, where the data is stored in a way that it can be retrieved quickly.
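
As a toy illustration of the shelf-vs-warehouse point (my sketch, not Facebook's actual design), a cache-aside read path might look like this:

// Read the feed from a fast in-memory cache first; fall back to the database
// and populate the cache on a miss.
const feedCache = new Map<string, string[]>();

async function getFeed(
  userId: string,
  loadFromDb: (userId: string) => Promise<string[]>,
): Promise<string[]> {
  const cached = feedCache.get(userId);
  if (cached) return cached;              // fast path: the "shelf"
  const posts = await loadFromDb(userId); // slow path: the "warehouse"
  feedCache.set(userId, posts);
  return posts;
}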

2024-07-15 How To Know When It's Time To Go

Found in 2024-07-15 Ten Years with Microservices :: Programming Digest

I retired in 2021 after 40 years as a programmer, not because I couldn't keep up but because I lost interest. Careers evolve, and everyone eventually reaches a point where they can no longer continue as they have. This isn't just about retirement; it can happen anytime. Some people become obsolete due to outdated technology, lose passion, or are forced out by market changes.

Sustaining a long programming career is challenging due to rapid technological shifts. Many of my peers either moved into management or became obsolete. It's essential to be honest with yourself about your ability to keep up and your job satisfaction. Sometimes, leaving programming or transitioning to a different field can bring greater fulfillment.

"Are you keeping up to date sufficiently to continue the job? Is the job even interesting anymore, or is there something else you would rather do?"

Making informed career decisions is crucial. Age and ability are not necessarily correlated, and personal fulfillment should take priority over financial reasons. Even in retirement, I continue to write code for my generative art practice, finding joy in the complexity and creativity it offers.

"Programming can be a fun career, a horrible nightmare, or something in between, and it never stands still."

Evaluate your career honestly, be open to change, and explore new opportunities when the current path no longer suits you.

2024-07-18 ‼️ Panic! at the Tech Job Market ‼️

Warning! This post is too long, but pleasant to read. I actually used Microsoft Edge TTS to read it and spent 2 good hours.

“I have the two qualities you require to see absolute truth: I am brilliant and unloved.”

"By the power of drawing two lines, we see correlation is causation and you can’t argue otherwise: interest rates go up, jobs go down."

"Nepo companies are the most frustrating because they suck up all the media attention for being outsized celebrity driven fads."

"Initial growth companies are the worst combination of high-risk, low-reward effort-vs-compensation tradeoffs."

"Modern tech hiring... has become a game divorced from meaningfully judging individual experience and impact."

"You must always open your brain live in front of people to dump out immediate answer to a series of pointless problems."

"Your job is physically impossible. You will always feel drained and incompetent because you can’t actually do everything everyday."

"AWS isn’t hands off 'zero-experience needed magic cloud'; AWS is actually 'datacenter as a service.'"

"The company thought they had 10,000 users per day... but my internal metrics showed only 300 users per day actually used the backend APIs."

"Most interview processes don’t even consider a person’s actual work and experience and capability."

"At some point, a switch flipped in the tech job market and 'programmer jobs' just turned into zero-agency task-by-task roles working on other people’s ideas under other people’s priorities to accomplish other people’s goals."

🎯 How the things work?

2024-07-15 How SQL Query works? SQL Query Execution Order for Tech Interview - DEV Community

Found in 2024-07-15 Ten Years with Microservices :: Programming Digest

image-20240806225527901

📢 Good Talks

2024-07-13 What you can learn from an open-source project with 300 million downloads - Dennis Doomen - YouTube

image-20240806225712599

Best Practices for Maintaining Fluent Assertions and Efficient Project Development

This talk covers effective techniques and tools for maintaining fluent assertions and managing development projects efficiently. It explores the use of GitHub for version control, emphasizing templates, change logs, and semantic versioning. The speaker also shares insights on tools like Slack, GitKraken, PowerShell, and more, highlighting their roles in streamlining workflows, ensuring code quality, and enhancing collaboration. Ideal for developers and project managers aiming to optimize their development processes and maintain high standards in their projects.

Tools discussed:

Project Management and Collaboration Tools

GitHub: GitHub hosts repositories, tracks issues, and integrates with various tools for maintaining projects. It supports version control and collaboration on code, providing features like pull requests, branch management, and GitHub Actions for CI/CD. Example output: Issues, pull requests, repository branches.

Development and Scripting Tools

Windows Terminal: Windows Terminal integrates various command-line interfaces like PowerShell and Bash into a single application, allowing for a seamless command-line experience. Example output: Command outputs from PowerShell, CMD, and Bash.

PowerShell: PowerShell is a scripting and automation framework from Microsoft, offering a command-line shell and scripting language for system management and automation tasks. Example output: Script execution results, system management tasks.

PSReadLine: PSReadLine enhances the PowerShell command-line experience with features like syntax highlighting, history, and better keyboard navigation. Example output: Enhanced command history navigation, syntax-highlighted command input.

vors/ZLocation: ZLocation is a command-line tool that allows quick navigation to frequently accessed directories by typing partial directory names. Example output: Instantly switching to a frequently used directory.

Git and Version Control Tools

GitHub Flow Like a Pro with these 13 Git Aliases | You’ve Been Haacked: Git Extensions/Aliases simplify Git command-line usage by providing shorthand commands and scripts to streamline common Git tasks. Example output: Simplified Git commands like git lg for a condensed log view.

GitKraken: GitKraken is a graphical interface for Git that provides a visual overview of your repository, including branches, commits, and merges, making it easier to manage complex Git workflows. Example output: Visual representation of branch history and commit graphs.

JetBrains Rider: JetBrains Rider is an IDE specifically designed for .NET development, providing advanced coding assistance, refactoring, and debugging features to enhance productivity. Example output: Code completion suggestions, integrated debugging sessions.

Code Quality and Formatting Tools

EditorConfig: EditorConfig helps maintain consistent coding styles across different editors and IDEs by defining coding conventions in a simple configuration file. Example output: Automatically formatted code based on .editorconfig settings.

Sergio0694/PolySharp: PolySharp allows the use of newer C# syntax features in older .NET versions, enabling modern coding practices in legacy projects. Example output: Code using new C# syntax features in older .NET environments.

Build and Deployment Tools

Nuke: Nuke is a build automation system for .NET that uses C# for defining build steps and pipelines, providing flexibility and type safety. Example output: Automated build and deployment steps written in C#.

GitVersion: GitVersion generates version numbers based on Git history, branch names, and tags, ensuring consistent and semantically correct versioning. Example output: Semantic version numbers automatically updated in the project.

Dependency Management and Security Tools

Dependabot: Dependabot automatically scans repositories for outdated dependencies and creates pull requests to update them, helping to keep dependencies up to date and secure. Example output: Pull requests for dependency updates with detailed change logs.

CodeQL: CodeQL is a code analysis tool integrated with GitHub that scans code for security vulnerabilities and other issues, providing detailed reports and alerts. Example output: Security alerts and code scanning reports.

Testing and Benchmarking Tools

Stryker.NET: Stryker.NET is a mutation testing tool for .NET that modifies code to check if tests detect the changes, ensuring comprehensive test coverage. Example output: Mutation testing reports showing test effectiveness.

ArchUnit: ArchUnit checks architecture rules in Java projects, ensuring that dependencies and structure conform to specified rules. (Similar tools exist for .NET). Example output: Reports on architecture rule violations.

Documentation Tools

Docusaurus: Docusaurus helps build project documentation websites easily, providing a platform for creating and maintaining interactive, static documentation. Example output: Interactive documentation websites generated from markdown files.

Miscellaneous Tools

CSpell: CSpell is an NPM package used for spell checking in code projects, ensuring textual accuracy in code comments, strings, and documentation. Example output: Spell check reports highlighting errors and suggestions.

2024-07-14 Failure & Change: Principles of Reliable Systems • Mark Hibberd • YOW! 2018 - YouTube

image-20240806225909552

Mark Hibberd's talk "Failure & Change: Principles of Reliable Systems" at YOW! 2018 explores building and operating reliable software systems, focusing on understanding and managing failures in complex and large-scale systems.

Reliability is defined as consistently performing well. Using airline engines as an example, Hibberd illustrates how opting for fewer engines can sometimes be safer due to lower failure probability and fewer knock-on effects. The key is to control the scope and consequences of failures.

"We need to be resilient to failure by controlling the scope and consequences of our failure."

Redundancy and independence are crucial. Redundancy should be managed carefully to maintain reliability, avoiding tightly coupled systems where a single failure can cascade into multiple failures. Service granularity helps manage failures effectively by breaking down systems into smaller, independent services, each handling specific responsibilities and passing values around to maintain independence.

"Service granularity gives us this opportunity to trade the likelihood of a failure for the consequences of a failure."

In operations, it's essential to implement health checks and monitoring to detect failures early and route around them aggressively to prevent overload and cascading failures. Using circuit breakers to cut off communication to failing services allows them to recover.

Designing systems with independent services is key. Services should operate independently, using shared values rather than shared states or dependencies. For example, an online chess service can be broken down into services for pairing, playing, history, and analysis, each maintaining independence.

Operational strategies include implementing timeouts and retries to handle slow responses and prevent overloads, and deploying new versions gradually to test against real traffic and verify responses. Proxies can interact with unreliable code to maintain a reliable view of data.

"Timeouts are so important that we probably should have some sort of government-sponsored public service announcement."
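
As an illustration of the timeout-and-retry advice (my sketch, not code from the talk), a caller can give each attempt a fixed time budget and bound the number of retries so a slow dependency cannot hold it hostage:

async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    work,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms),
    ),
  ]);
}

async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await withTimeout(fn(), 1000); // 1s budget per attempt (arbitrary)
    } catch (err) {
      lastError = err; // a real system would back off before the next attempt
    }
  }
  throw lastError;
}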

Handling change in complex systems involves accommodating changes without significant disruptions through continuous deployment and rolling updates. Techniques like in-production verification and routing requests to both old and new versions during deployment help ensure reliability.

Data management is also crucial. Separating data storage from application logic helps maintain reliability during changes. Avoid coupling data handling directly with services to facilitate easier updates and rollbacks.

"We want to create situations where we can gracefully roll things out and flatten out this time dimension."

Hibberd emphasizes making informed trade-offs in architecture, redundancy, and granularity to enhance the reliability of software systems. Continuous monitoring, strategic failure handling, and incremental deployment are essential to ensure systems remain resilient and reliable despite inevitable failures and changes.

🤖 The Era of AI

2024-07-01 The limitations of LLMs, or why are we doing RAG? | EDB

image-20240806230126557

Despite powerful capabilities with many tasks, Large Language Models (LLMs) are not know-it-alls. If you've used ChatGPT or other models, you'll have experienced how they can’t reasonably answer questions about proprietary information. What’s worse, it isn’t just that they don't know about proprietary information, they are unaware of their own limitations and, even if they were aware, they don’t have access to proprietary information. That's where options like Retrieval Augmented Generation (RAG) come in and give LLMs the ability to incorporate new and proprietary information into their answers.

2024-06-18 What Is ChatGPT Doing … and Why Does It Work?—Stephen Wolfram Writings { writings.stephenwolfram.com }

image-20240806230337671

It’s Just Adding One Word at a Time

That ChatGPT can automatically generate something that reads even superficially like human-written text is remarkable, and unexpected. But how does it do it? And why does it work? My purpose here is to give a rough outline of what’s going on inside ChatGPT—and then to explore why it is that it can do so well in producing what we might consider to be meaningful text. I should say at the outset that I’m going to focus on the big picture of what’s going on—and while I’ll mention some engineering details, I won’t get deeply into them. (And the essence of what I’ll say applies just as well to other current “large language models” [LLMs] as to ChatGPT.)

The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”
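
A toy illustration of that "one word at a time" idea: given probabilities a model might assign to candidate next words (the values below are made up), pick one and append it:

function sampleNextWord(probs: Record<string, number>): string {
  const entries = Object.entries(probs);
  const total = entries.reduce((sum, [, p]) => sum + p, 0);
  let r = Math.random() * total; // normalize, since the list is truncated
  for (const [word, p] of entries) {
    r -= p;
    if (r <= 0) return word;
  }
  return entries[entries.length - 1][0]; // guard against rounding error
}

// "The best thing about AI is its ability to ..."
const next = sampleNextWord({ learn: 0.045, predict: 0.035, make: 0.032, understand: 0.031 });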

2024-06-22 Practical Applications of Generative AI: How to Sprinkle a Little AI in Your App - Phil Haack - YouTube

Be Positive

  • ✅ Do this: "Explain how to implement a sorting algorithm."
  • ❌ Don't do this: "Don't talk about unrelated algorithms."
  • Example: Nike was on the right track when they said, "Just do it." Telling the model what not to do can lead it to do just that.

Give the Model an Out

  • ✅ Do this: "If you don't know the answer, it's okay to say 'I don't know.'"
  • ❌ Don't do this: "You must provide an answer for every question."
  • Let the model say 'I don’t know' to reduce hallucinations.

Break Complex Tasks into Subtasks

  • ✅ Do this: "Write three statements for and against using AI in education. Then use those statements to write an essay."
  • ❌ Don't do this: "Write an essay on AI in education."
  • Example: For an essay, ask the AI to write three statements for and against a point. Then have it use those statements to write the essay.

Ask for Its Chain of Thought

  • ✅ Do this: "Explain why you think using AI can improve customer service."
  • ❌ Don't do this: "Just tell me how AI can improve customer service without any explanation."
  • Ask it to explain its reasoning. Lately, it seems GPT-4 does this automatically.

Check the Model’s Comprehension

  • ✅ Do this: "Do you understand the task of generating a summary of this article?"
  • ❌ Don't do this: "Summarize this article without confirming if you understood the task."
  • "Do you understand the task?"
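
A small sketch that bakes the tips above into a prompt-building helper; the wording is illustrative, not from the talk:

function buildPrompt(article: string): string {
  return [
    // Be positive: state what to do, not what to avoid.
    "Summarize the article below in three bullet points.",
    // Give the model an out to reduce hallucinations.
    "If the article does not contain enough information, say \"I don't know\".",
    // Break the complex task into subtasks.
    "First list the article's main claims, then write the bullet-point summary.",
    // Ask for its chain of thought.
    "Briefly explain the reasoning behind each bullet point.",
    // Check the model's comprehension.
    "Before answering, restate the task in one sentence to confirm you understood it.",
    "",
    `Article:\n${article}`,
  ].join("\n");
}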

Links

2024-07-31 Building A Generative AI Platform

image-20240806230638016

(found in 2024-07-31 Programming Digest)

After studying how companies deploy generative AI applications, I noticed many similarities in their platforms. This post outlines the common components of a generative AI platform, what they do, and how they are implemented. I try my best to keep the architecture general, but certain applications might deviate. This is what the overall architecture looks like.

2024-08-05 📌 How I Use "AI" (nicholas.carlini.com)

image-20240806230826595

  • To build complete applications for me
  • As a tutor for new technologies
  • To get started with new projects
  • To simplify code
  • For monotonous tasks
  • To make every user a "power user"
  • As an API reference