Short Month, Big Ideas (February 2026 Wallpapers Edition)

Sometimes, the best inspiration lies right in front of us. With that in mind, we embarked on our wallpapers adventure more than 14 years ago. The idea: to provide you with a new collection of unique and inspiring desktop wallpapers every month. This February is no exception, of course.

For this post, artists and designers from across the globe once again got their ideas flowing and designed wallpapers to bring some good vibes to your desktops and home screens. All of them come in a variety of screen resolutions and can be downloaded for free. A huge thank-you to everyone who shared their design with us this month — this post wouldn’t exist without your kind support!

If you too would like to get featured in one of our next wallpapers posts, please don’t hesitate to submit your design. We are always looking for creative talent and can’t wait to see your story come to life!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express their emotions and experiences through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.

Eternal Tech

“The first one being older than 100 years, radio is still connecting people, places, and events.” — Designed by Ginger It Solutions from Serbia.

  • preview
  • with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1020, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1020, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Coffee Break

Designed by Ricardo Gimenes from Spain.

  • preview
  • with calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Mosa-hic

“Small colored squares make me think of a mosaic, but the squares are not precisely tiled, so I call it a ‘mosa-hic,’ like the ‘hic hic’ sound someone makes when they’ve had a bit too much to drink.” — Designed by Philippe Brouard from France.

  • preview
  • with calendar: 1024×768, 1366×768, 1600×1200, 1920×1080, 1920×1200, 2560×1440, 2560×1600, 2880×1800, 3840×2160
  • without calendar: 1024×768, 1366×768, 1600×1200, 1920×1080, 1920×1200, 2560×1440, 2560×1600, 2880×1800, 3840×2160

Search Within

“I used the search-bar metaphor to reflect a daily habit and transform it into a moment of introspection, reminding myself to pause and look inward.” — Designed by Hitesh Puri from Delhi, India.

  • preview
  • with calendar: 430×932, 1024×1024, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
  • without calendar: 430×932, 1024×1024, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

The Lighthouse Of Mystery

“We continue the film saga. This time, we go to the mysterious Shutter Island, a lighthouse with many mysteries that will absorb you.” — Designed by Veronica Valenzuela from Spain.

  • preview
  • with calendar: 640×480, 800×480, 1024×768, 1280×720, 1280×800, 1440×900, 1600×1200, 1920×1080, 1920×1440, 2560×1440
  • without calendar: 640×480, 800×480, 1024×768, 1280×720, 1280×800, 1440×900, 1600×1200, 1920×1080, 1920×1440, 2560×1440

That’s All Folks

Designed by Ricardo Gimenes from Spain.

  • preview
  • with calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Fall In Love With Yourself

“We dedicate February to Frida Kahlo to illuminate the world with color. Fall in love with yourself, with life, and then with whoever you want.” — Designed by Veronica Valenzuela from Spain.

  • preview
  • without calendar: 640×480, 800×480, 1024×768, 1280×720, 1280×800, 1440×900, 1600×1200, 1920×1080, 1920×1440, 2560×1440

Plants

“I wanted to draw some very cozy place, both realistic and cartoonish, filled with little details. A space with a slightly unreal atmosphere that some great shops or cafes have. A mix of plants, books, bottles, and shelves seemed like a perfect fit. I must admit, it took longer to draw than most of my other pictures! But it was totally worth it. Watch the making-of.” — Designed by Vlad Gerasimov from Georgia.

  • preview
  • without calendar: 800×480, 800×600, 1024×600, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1440×960, 1600×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2560×1600, 2880×1800, 3072×1920, 3840×2160, 5120×2880

Farewell, Winter

“Although I love winter (mostly because of the fun winter sports), there are other great activities ahead. Thanks, winter, and see you next year!” — Designed by Igor Izhik from Canada.

  • preview
  • without calendar: 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2560×1600

Love Is In The Play

“Forget Lady and the Tramp and their spaghetti kiss, ’cause Snowflake and Cloudy are enjoying their bliss. The cold and chilly February weather made our kitties knit themselves a sweater. Knitting and playing, the kitties tangled in the yarn and fell in love in your neighbor’s barn.” — Designed by PopArt Studio from Serbia.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

True Love

Designed by Ricardo Gimenes from Spain.

  • preview
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

February Ferns

Designed by Nathalie Ouederni from France.

  • preview
  • without calendar: 320×480, 1024×768, 1280×1024, 1440×900, 1680×1200, 1920×1200, 2560×1440

The Great Beyond

Designed by Lars Pauwels from Belgium.

  • preview
  • without calendar: 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

It’s A Cupcake Kind Of Day

“Sprinkles are fun, festive, and filled with love… especially when topped on a cupcake! Everyone is creative in their own unique way, so why not try baking some cupcakes and decorating them for your sweetie this month? Something homemade, like a cupcake or DIY craft, is always a sweet gesture.” — Designed by Artsy Cupcake from the United States.

  • preview
  • without calendar: 320×480, 640×480, 800×600, 1024×768, 1152×864, 1280×800, 1280×1024, 1366×768, 1440×900, 1600×1200, 1680×1200, 1920×1200, 1920×1440, 2560×1440

Mochi

Designed by Ricardo Gimenes from Spain.

  • preview
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Balloons

Designed by Xenia Latii from Germany.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Magic Of Music

Designed by Vlad Gerasimov from Georgia.

  • preview
  • without calendar: 800×480, 800×600, 1024×600, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1440×960, 1600×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2560×1600, 2880×1800, 3072×1920, 3840×2160, 5120×2880

Love Angel Vader

“Valentine’s Day is coming? Noooooooooooo!” — Designed by Ricardo Gimenes from Spain.

  • preview
  • without calendar: 320×480, 640×960, 1024×768, 1024×1024, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1050, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2880×1800

Principles Of Good Design

“The simplicity seen in the work of Dieter Rams which has ensured his designs from the 50s and 60s still hold a strong appeal.” — Designed by Vinu Chaitanya from India.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Time Thief

“Who has stolen our time? Maybe the time thief, so be sure to enjoy the other 28 days of February.” — Designed by Colorsfera from Spain.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1260×1440, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Dark Temptation

“A dark romantic feel, walking through the city on a dark and rainy night.” — Designed by Matthew Talebi from the United States.

  • preview
  • without calendar: 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

The Bathman

Designed by Ricardo Gimenes from Spain.

  • preview
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Snowy Sunset

Designed by Nathalie Croze from France.

  • preview
  • without calendar: 320×480, 1680×1050, 1920×1080, 2560×1440

Like The Cold Side Of A Pillow

Designed by Sarah Tanner from the United States.

  • preview
  • without calendar: 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Ice Cream Love

“My inspiration for this wallpaper is the biggest love someone can have in life: the love for ice cream!” — Designed by Zlatina Petrova from Bulgaria.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Febrewery

“I live in Madison, WI, which is famous for its breweries. Wisconsin even named their baseball team ‘The Brewers.’ If you like beer, brats, and lots of cheese, it’s the place for you!” — Designed by Danny Gugger from the United States.

  • preview
  • without calendar: 320×480, 1020×768, 1280×800, 1280×1024, 1136×640, 2560×1440

Share The Same Orbit!

“I prepared a simple and chill layout design for February called ‘Share The Same Orbit!’ which suggests to share the love orbit.” — Designed by Valentin Keleti from Romania.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Febpurrary

“I was doodling pictures of my cat one day and decided I could turn it into a fun wallpaper — because a cold, winter night in February is the perfect time for staying in and cuddling with your cat, your significant other, or both!” — Designed by Angelia DiAntonio from Ohio, USA.

  • preview
  • without calendar: 320×480, 800×480, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Snow

Designed by Elise Vanoorbeek from Belgium.

  • preview
  • without calendar: 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1440×900, 1600×1200, 1680×1050, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 1366×768, 2880×1800

Dog Year Ahead

Designed by PopArt Studio from Serbia.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

French Fries

Designed by Doreen Bethge from Germany.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

“Greben” Icebreaker

“Danube is Europe’s second largest river, connecting ten different countries. In these cold days, when ice paralyzes rivers and closes waterways, a small but brave icebreaker called Greben (Serbian word for ‘reef’) seems stronger than winter. It cuts through the ice on Đerdap gorge (Iron Gate) — the longest and biggest gorge in Europe — thus helping the production of electricity in the power plant. This is our way to give thanks to Greben!” — Designed by PopArt Studio from Serbia.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Out There, There’s Someone Like You

“I am a true believer that out there in this world there is another person who is just like us, the problem is to find her/him.” — Designed by Maria Keller from Mexico.

  • preview
  • without calendar: 320×480, 640×480, 640×1136, 750×1334, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1242×2208, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2880×1800

On The Light Side

Designed by Ricardo Gimenes from Spain.

  • preview
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Get Featured Next Month

Feeling inspired? We’ll publish the March wallpapers on February 28, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We are already looking forward to it!

Cellular automata that mimic reality

    Cellular automata have many uses, but the one I will detail in this post, as I said in one of my other blog posts titled "...", is modeling populations. I will be talking about modeling human populations.
    As humans, if we are alive, we need at the very least one companion; it's just our nature. But people are also more or less introverted or extroverted, so a random value from 3 to 9 can mimic this. This is what that would look like:
            import random

            next_grid = []
            for i, cell in enumerate(grid):
                n = live_neighbors(grid, i)  # assumed helper: counts the live neighbors of cell i
                if cell == 1:
                    # a live cell survives with at least one neighbor, capped by a random 3-9 threshold
                    next_grid.append(1 if 0 < n < random.randint(3, 9) else 0)
                else:
                    # a dead cell is born when it has exactly 2 live neighbors
                    next_grid.append(1 if n == 2 else 0)
    This gives semi-interesting results, but eventually it collapses into a fixed state. The next rule set interests me more because it mimics slime mold, which is a kind of mold that slowly grows in a slime-like shape; the image above is an example of the algorithm. This rule set makes it so that no cell dies while it is alive, and a dead cell is created only when it has exactly 3 live neighbors. The code looks like this:
            next_grid = []
            for i, cell in enumerate(grid):
                if cell == 1:
                    next_grid.append(1)  # live cells never die
                else:
                    n = live_neighbors(grid, i)  # assumed helper: counts the live neighbors of cell i
                    next_grid.append(1 if n == 3 else 0)  # born with exactly 3 live neighbors
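For reference, here is a complete, runnable sketch of this second rule set in Python, on a 2D grid with wrap-around edges. The grid size, random seed, and 2D layout are my own assumptions; the rule itself ("no live cell dies, birth on exactly 3 neighbors") is known in the cellular-automata literature as "Life without Death".

```python
import random

def count_live_neighbors(grid, r, c):
    """Count live cells in the 8 surrounding positions, wrapping at the edges."""
    rows, cols = len(grid), len(grid[0])
    total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            total += grid[(r + dr) % rows][(c + dc) % cols]
    return total

def step(grid):
    """Apply one generation of the 'no cell dies' rule."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                nxt[r][c] = 1                               # live cells never die
            else:
                n = count_live_neighbors(grid, r, c)
                nxt[r][c] = 1 if n == 3 else 0              # birth on exactly 3
    return nxt

random.seed(0)
grid = [[random.randint(0, 1) for _ in range(20)] for _ in range(20)]
for _ in range(10):
    new = step(grid)
    # because no live cell ever dies, the population can only grow
    assert sum(map(sum, new)) >= sum(map(sum, grid))
    grid = new
```

Since live cells are copied forward unchanged, the population is monotonically non-decreasing, which is exactly why the pattern slowly "grows" like a mold.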
    That simple a rule makes something that is truly captivating to watch grow over time. There really isn't anything more to say. Thanks for reading, have a great day :)

Time Blocking System in Appointment Booking

1. What is a Time Blocking System?

A time blocking system is an appointment management approach where:

  • Each booking stores a start time and end time
  • This time interval is considered occupied
  • No other appointment is allowed to overlap with this interval

In simple words:

Once a service is booked for a time range, that time range is blocked and cannot be reused.

This system models real-world resource usage, where a vendor can serve only one customer at a time.

2. How Time Blocking Works (Conceptual)

  1. Vendor defines working hours (availability)
  2. Customer selects a service (with fixed duration)
  3. System checks existing bookings
  4. If the new booking overlaps with any existing booking, it is rejected
  5. If no overlap exists, the booking is confirmed

Thus, time is managed as continuous intervals, not as fixed predefined slots.
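As a concrete illustration, the five steps above can be sketched in Python. The Booking type and in-memory list here are illustrative assumptions, not a reference to any particular implementation:

```python
# Minimal sketch of interval-based booking: a new booking is confirmed only
# if its [start, end) range overlaps no existing booking.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Booking:
    start: datetime
    end: datetime

def try_book(existing: list, start: datetime, duration_min: int) -> bool:
    end = start + timedelta(minutes=duration_min)
    for b in existing:
        # two half-open intervals overlap iff each starts before the other ends
        if start < b.end and b.start < end:
            return False          # step 4: overlap found, booking rejected
    existing.append(Booking(start, end))
    return True                   # step 5: no overlap, booking confirmed

bookings = []
t = datetime(2026, 2, 10, 10, 0)
assert try_book(bookings, t, 30)                              # 10:00-10:30 confirmed
assert not try_book(bookings, t + timedelta(minutes=15), 90)  # overlaps, rejected
assert try_book(bookings, t + timedelta(minutes=30), 90)      # 10:30-12:00 confirmed
```

Note that the service duration is a plain number of minutes, so services of any length work without predefined slots.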

3. Why Time Blocking is Needed

3.1 Real-World Accuracy

In real life:

  • A salon chair
  • A doctor
  • A consultant

All can handle only one service at a time.

Time blocking mirrors this reality by ensuring that:

  • One service fully occupies a time window
  • Partial overlaps are not allowed

3.2 Supports Variable Service Durations

Different services take different amounts of time.

Example:

  • Haircut → 30 minutes
  • Hair coloring → 90 minutes

Time blocking allows services of any duration without modifying the system.

3.3 Prevents Double Booking

Because every booking blocks a time range:

  • Two customers cannot book overlapping appointments
  • Data integrity is maintained automatically

In basic, single-process systems this also avoids race conditions; concurrent, multi-user deployments additionally need database-level locking or constraints to stay safe.

4. Comparison with Other Appointment Systems

4.1 Fixed Slot-Based System

How it works:

  • Day is divided into equal time slots (e.g., 30 minutes)
  • Appointments occupy one or more slots

Limitations:

  • Inefficient for long services
  • Requires slot merging logic
  • Wastes unused time
  • Difficult to manage variable durations

Why Time Blocking is Better:

  • No slot merging required
  • Exact service duration is used
  • Less wasted time

4.2 Pre-Stored Slot System

How it works:

  • Slots are generated and stored in advance in the database

Limitations:

  • High storage usage
  • Needs regeneration if schedule changes
  • Complex maintenance

Why Time Blocking is Better:

  • No need to store slots
  • Slots are calculated dynamically
  • System remains flexible

4.3 Queue-Based System

How it works:

  • Customers wait in order
  • No fixed time is guaranteed

Limitations:

  • Poor user experience
  • Unpredictable waiting times
  • Not suitable for scheduled services

Why Time Blocking is Better:

  • Provides certainty to customers
  • Better time management for vendors

4.4 Vendor Approval-Based System

How it works:

  • Vendor manually approves each booking

Limitations:

  • Slow confirmation
  • Increased operational load
  • Poor scalability

Why Time Blocking is Better:

  • Automated conflict checking
  • Instant confirmation
  • Scales better

5. Technical Advantages of Time Blocking

5.1 Simple Overlap Logic

Conflict detection uses a single condition:

Two appointments conflict if their time ranges overlap.

This makes implementation:

  • Simple
  • Efficient
  • Reliable
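The entire condition fits in a single comparison, sketched here in Python (the function name is mine):

```python
def overlaps(start_a, end_a, start_b, end_b):
    # half-open intervals [start, end) conflict iff each begins before the other ends
    return start_a < end_b and start_b < end_a

assert overlaps(10, 30, 15, 45)       # partial overlap conflicts
assert not overlaps(10, 30, 30, 45)   # back-to-back bookings do not conflict
```

Using half-open intervals is what lets one appointment end at 10:30 and the next begin at 10:30 without a false conflict.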

5.2 Database-Friendly Design

Only one table (Booking) is required to manage conflicts.

  • No redundant data
  • Easy indexing
  • Easy querying
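As a sketch of how this looks against a single table, here is the same overlap condition expressed as a SQL query, using Python's built-in sqlite3 module. The table and column names are illustrative assumptions:

```python
import sqlite3

# one Booking table is enough to detect conflicts
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE booking (
    vendor_id INTEGER, start_ts TEXT, end_ts TEXT)""")
conn.execute("CREATE INDEX idx_vendor_time ON booking (vendor_id, start_ts)")
conn.execute("INSERT INTO booking VALUES (1, '2026-02-10T10:00', '2026-02-10T10:30')")

def has_conflict(vendor_id, start_ts, end_ts):
    # overlap: existing.start < new.end AND new.start < existing.end
    row = conn.execute(
        """SELECT COUNT(*) FROM booking
           WHERE vendor_id = ? AND start_ts < ? AND ? < end_ts""",
        (vendor_id, end_ts, start_ts)).fetchone()
    return row[0] > 0

assert has_conflict(1, "2026-02-10T10:15", "2026-02-10T11:00")      # overlaps
assert not has_conflict(1, "2026-02-10T10:30", "2026-02-10T11:00")  # back-to-back ok
```

ISO-8601 timestamp strings compare correctly as text, which keeps the query simple; a production schema would more likely use native datetime columns.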

5.3 Scalability

Time blocking works equally well for:

  • 1 vendor
  • 1,000 vendors
  • Multiple service durations

No redesign is needed as the system grows.

6. Why Time Blocking is the Best Choice for This Application

  • Reflects real-world service behavior
  • Handles variable service durations naturally
  • Avoids unnecessary database complexity
  • Prevents double booking effectively
  • Easy to explain, implement, and maintain

For an application like Justdial + Appointment Booking, time blocking provides the best balance between:

  • realism
  • simplicity
  • scalability

Anaconda Distribution in Development

Introduction

Python is the most popular programming language in the field of Artificial Intelligence (AI), especially in areas such as Machine Learning (ML) and Natural Language Processing (NLP). However, writing AI programs requires more than just the Python language. Developers need many external libraries, tools, and a stable development environment.

Anaconda Distribution was created to solve these problems by providing an all-in-one platform for Python-based scientific and AI development. This article explains what Anaconda is, why it is important, its advantages and disadvantages, and a focused comparison between Pure Python and Anaconda Distribution in AI development.

Importance of Anaconda Distribution

AI development involves handling large datasets, mathematical computations, model training, and visualization. Installing and managing all required libraries manually can be difficult, especially for beginners.

Anaconda is important because it:

  • Simplifies Python setup for AI and data science

  • Reduces dependency and version conflicts

  • Provides a ready-to-use environment for ML and NLP

  • Saves time during project setup

Because of these reasons, Anaconda is widely used in universities, research labs, and AI-based industries.

Advantages of Anaconda Distribution

1. Pre-installed AI Libraries

Anaconda includes most libraries required for AI development, such as:

  • NumPy

  • Pandas

  • Matplotlib

  • Scikit-learn

  • SciPy

This makes it ideal for ML and NLP beginners.

2. Easy Environment Management

Anaconda uses Conda environments, allowing developers to create isolated environments for different projects. This prevents library version conflicts, which are common in AI projects.

3. Beginner-Friendly

With tools like Anaconda Navigator and Jupyter Notebook, Anaconda is very easy to use, even for users with minimal technical experience.

4. Stable for AI Workloads

AI models often depend on specific versions of libraries. Anaconda ensures better compatibility and stability compared to manual installations.

Disadvantages of Anaconda Distribution

  1. Large Disk Space Usage

Anaconda requires several gigabytes of disk space, which may not be ideal for low-storage systems.

  2. Slower Package Installation

Conda package installation can be slower than pip in some cases.

  3. Not Always Necessary

For small or lightweight Python applications, Anaconda may feel heavy and unnecessary.

Pure Python vs Anaconda Distribution in AI Development

This comparison focuses only on AI-related development, especially Machine Learning (ML) and Natural Language Processing (NLP).

1. Setup and Installation

Pure Python

  • Requires manual installation of Python

  • AI libraries must be installed one by one using pip

  • Higher chance of dependency errors

Anaconda Distribution

  • Python and AI libraries come pre-installed

  • Minimal setup required

  • Very beginner-friendly

✔ Anaconda is better for faster AI setup.

2. Library Management

Pure Python

  • Uses pip for package installation

  • Dependency conflicts are common in ML projects

Anaconda Distribution

  • Uses conda, which manages both packages and environments

  • Better handling of complex AI dependencies

✔ Anaconda is more reliable for ML and NLP projects.

3. Environment Isolation

Pure Python

  • Virtual environments must be created manually

  • Beginners often misuse or skip them

Anaconda Distribution

  • Environment creation is simple and well-integrated

  • Ideal for running multiple AI experiments

✔ Anaconda provides safer AI experimentation.

4. Performance in AI Tasks

Pure Python

  • Performance depends heavily on correct library versions

  • Misconfigured environments can slow down development

Anaconda Distribution

  • Optimized libraries for numerical and scientific computing

  • More stable for training ML models

✔ Anaconda offers more consistent performance.

5. Suitability for ML and NLP

Pure Python

  • Suitable for experienced developers

  • Better for custom or lightweight AI systems

Anaconda Distribution

  • Best choice for students, researchers, and AI beginners

  • Commonly used in ML, NLP, and data science education

✔ Anaconda is more suitable for learning and academic AI development.

Conclusion

Anaconda Distribution plays a vital role in modern AI development by simplifying Python setup and library management. It is especially valuable in Machine Learning and Natural Language Processing, where multiple libraries and stable environments are required.

While Pure Python offers flexibility and lighter installations, it demands more technical knowledge and manual management. Anaconda Distribution, on the other hand, provides a complete and beginner-friendly ecosystem that accelerates AI development and learning.

For students, beginners, and researchers working in ML and NLP, Anaconda Distribution is the better choice. Pure Python is more suitable for advanced developers who require full control over their development environment.

Ignacia Portfolio Engine V1

This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI

About Me

Based in Gloucester, Massachusetts, I am Ignacia Heyer—a creative technologist, Web3 entrepreneur, and wellness-driven builder.

My work is centered at the intersection of AI, rapid prototyping, brand identity, and decentralized digital experiences. My passion is transforming abstract ideas into clean, scalable systems that are intentional, human-centered, and focused on the future.

This portfolio is not a static website; it is a living, generative system that evolves alongside me. Through it, I aim to convey three key aspects:

  1. Identity: A digital presence that genuinely reflects my voice, energy, and vision.
  2. Clarity: A clear demonstration of who I am, what I build, and the value I deliver.
  3. Momentum: An expression of the speed and creativity I apply to the prototyping process.

Portfolio

curl -X POST "https://ignacia-portfolio-engine-v1-473525287303.us-west1.run.app/generate" \
  -H "Content-Type: application/json" \
  -d '{"task_type": "one_page_pdf"}'

How I Built It

I restructured my portfolio as an AI-powered engine rather than a conventional website. Instead of manual content creation, I engineered a system capable of generating, on demand and via a single API call:

  • Complete multi-page portfolio websites
  • Single-page, PDF-style resumes
  • Professional pitch decks
  • Judge-ready introductory scripts
  • Social media announcements

This was designed to establish a dynamic identity layer that instantly adapts its format and content for diverse audiences.

Tech Stack Overview

Frontend / Output Layer

– Markdown‑based content generated by Gemini

– Can be rendered into websites, PDFs, or slides

Backend

– Python (Flask) API

– Custom endpoints: /generate, /status, /health, /tasks, /connect, /demo

– Structured logging + error handling

– Versioned service architecture

Infrastructure

– Google Cloud Run (serverless, auto‑scaling)

– Docker container for portability

– Environment variables for secure API key management

AI Layer

– Google AI Studio (Gemini)

– Custom system prompt trained on:

– My bio

– Skills

– Projects

– Brand voice

– Social media links

– Output formats

Technical Architecture Overview

This solution is built upon a modern, serverless technology stack, leveraging Google’s AI and cloud infrastructure.

AI and Content Generation

  • AI Engine: Google AI Studio (Gemini) is used as the core AI layer.
  • Customization: The Gemini model is trained via a custom system prompt incorporating:

    • My professional bio and skills.
    • Key projects.
    • My specific brand voice.
    • Social media links.
    • Defined output formats.
  • Output Layer (Frontend): Content is generated in a flexible Markdown format, allowing it to be easily rendered into various final products (websites, PDFs, presentation slides).

Backend and API

  • Framework: A Python (Flask) API manages the core application logic.
  • Endpoints: Essential custom endpoints include:

    • /generate (for content creation)
    • /status and /health (for monitoring)
    • /tasks
    • /connect
    • /demo
  • Development Practices: The backend features structured logging, robust error handling, and a versioned service architecture.

Infrastructure and Deployment

  • Platform: Google Cloud Run provides a serverless and auto-scaling environment.
  • Portability: The application is containerized using Docker.
  • Security: Environment variables are utilized for the secure management of API keys.

AI-Powered Portfolio Engine: Design and Development

Core Design Principles

This project was built on five key design decisions, ensuring a flexible, predictable, and professional system:

  1. Generative over Static: The portfolio is designed to evolve automatically, eliminating the need for manual updates and ensuring continuous relevance.
  2. Universal Markdown Output: Markdown was selected as the sole output format for its flexibility and easy conversion into multiple assets: websites, PDFs, slide decks, and social media posts.
  3. Clear Task-Based Architecture: Predictability and extensibility are achieved by generating each asset based on a specified task_type.
  4. Human-Centered Tone: A dedicated system prompt enforces a consistent voice for every output, ensuring the tone is confident, clear, professional, creative, and future-focused.
  5. Serverless Deployment (Cloud Run): This choice provides a robust, zero-maintenance infrastructure, offering instant scaling, a public HTTPS endpoint, and seamless integration with AI Studio.
Development Workflow

The creation of the AI-powered portfolio engine followed a systematic six-step process:
1. Identity Definition (System Prompt): A detailed system prompt was drafted to capture my professional identity, including my story, skills, projects, desired tone, and social links.
2. AI Logic Building (Google AI Studio): The core AI logic was built and rigorously tested in Google AI Studio, iterating on outputs like multi-page websites, pitch decks, PDFs, intro scripts, and social posts until consistency and quality were achieved.
3. Backend Creation (Flask API): A Flask API was developed to serve as the intermediary, receiving a task, sending it to the Gemini API, and returning the clean Markdown output.
4. Professional Endpoint Implementation: Standard, production-ready endpoints were added for system management and information: /status, /health, /version, /tasks, /connect, and /demo.
5. Deployment (Cloud Run): The application was containerized with Docker and securely deployed to Google Cloud Run, utilizing environment variables for security.
6. Final Testing and Refinement: All endpoints were validated using curl, and the generative outputs were polished to ensure consistent quality.
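As a sketch of that final validation step, the curl checks might look like the script below. The service URL is a placeholder and the script shape is illustrative; only the endpoint paths come from the list above.

```shell
#!/bin/sh
# Smoke-test the management endpoints after deployment.
# BASE_URL is a placeholder; substitute the real Cloud Run service URL.
BASE_URL="${BASE_URL:-https://portfolio-engine-example.a.run.app}"

check_endpoint() {
  # Prints "<path> <http-status>" for a single endpoint.
  path="$1"
  status=$(curl -s --max-time 5 -o /dev/null -w '%{http_code}' "$BASE_URL$path")
  printf '%s %s\n' "$path" "$status"
}

for ep in /status /health /version /tasks /connect /demo; do
  check_endpoint "$ep"
done
```

Any endpoint that does not report a 200 status is worth investigating before calling the deployment done.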
Google AI Tools Utilized

The engine relies on a powerful combination of Google AI and Cloud technologies:
  • Google AI Studio (Gemini): Used for defining the custom system prompt, task-based generation, multi-format content testing, and overall iteration environment.
  • Gemini API: The core generative engine integrated into the Cloud Run backend, responsible for creating all portfolio assets, handling structured prompts, and producing Markdown.
  • Google Cloud Run: The hosting platform for the API, providing serverless scaling, automatic HTTPS management, logging, and deployment infrastructure.

Together, these tools form a modern, flexible, and fully automated AI-powered portfolio engine tailored to my unique identity.

What I’m Most Proud Of

My greatest achievement is the development of a portfolio system that goes beyond simply showcasing my work; it embodies my forward-thinking philosophy. Rather than a traditional, static website, I engineered a dynamic, generative identity layer that constantly adapts alongside my evolution. This project reflects my core belief that design must be adaptive, intentional, and profoundly human. It’s an intersection of artistic expression and technical execution—precisely where I find myself doing my most impactful work.

Qodana for Android: Increasing Code Quality for Kotlin-First Teams

Qodana for Android

When people think about tooling for Android development, the conversation often gravitates towards platform-specific concerns: UI performance, layout validation, device compatibility, or resource management. Yet for many Android teams, the most persistent challenges don’t sit at the platform level – they’re in the team’s Kotlin codebase itself.

As Android projects scale, Kotlin brings strong language features, but it doesn’t eliminate complexity. Null safety can still be misused, functions grow unchecked, architectural boundaries blur, and code style diverges across teams. This is where Qodana for Android becomes more relevant.

Qodana is not an Android framework tool, and it doesn’t aim to replace Android-specific tooling such as Android Lint. Instead, it focuses on Kotlin code quality, automation, and consistency, which are central concerns for modern Android development.

In the past, you might have seen Developer Advocate Anton Arhipov showing users how Qodana works with Kotlin to improve code quality for Android developers. 

Qodana has advanced significantly since then, and the needs of Android developers have grown, too. 

Kotlin as the common denominator in Android teams

At JetBrains, we’re proud of having created Kotlin. While Kotlin is a powerful, multi-purpose language used for backend development and for building cross-platform applications across mobile, desktop, and web, it’s also widely recognized as the primary language for Android development. That’s why bringing strong code quality tooling like Qodana to Android teams is so important.

Within these teams, Kotlin is also used beyond the mobile app itself. Shared libraries, test infrastructure, build tooling, and even backend services are increasingly written in the language.

This creates a need for consistent Kotlin standards across different parts of an organization. In practice, however, consistency is difficult to maintain. Local IDE inspections vary, CI checks are often incomplete, and code review standards depend heavily on individual reviewers.

Qodana for Android addresses this gap by allowing Android teams to enforce Kotlin inspections automatically and consistently in CI systems, using the same inspection engine that developers already rely on in Android Studio and IntelliJ IDEA. This alignment is critical. When local feedback and automated checks disagree, developers lose trust in both.

Creating a CI-first quality gate with code analysis

Most issues that slow Android teams down are not catastrophic runtime failures. They are incremental quality regressions: overly complex methods, fragile null handling, unused code, or visibility issues that make refactoring harder over time.

Qodana runs Kotlin static analysis as part of the software development lifecycle, shifting quality checks earlier and making them repeatable. Rather than relying on manual code reviews to catch everything, teams can use Qodana as a quality gate that highlights high-impact issues before code is merged.

This is particularly valuable for Android projects with multiple modules, long-lived feature branches, or distributed teams. Automated analysis ensures that quality expectations don’t change depending on who reviews the pull request.

What Qodana for Android is, and what it’s not

It is important to be precise about what Qodana does in an Android context.

What Qodana for Android does:

  • Analyze Kotlin code used in Android projects
  • Automate Kotlin quality checks in CI
  • Align IDE inspections with automated enforcement
  • Support scalable quality management across teams
  • Analyze project layouts and resources

What Qodana for Android does not do:

  • Replace Android Lint
  • Analyze device-specific behavior
  • Provide Android framework diagnostics

Qodana can complement existing Android tooling rather than compete with it.

Automation and the reality of scale

As Android teams grow, review capacity does not scale linearly with team size. Kotlin helps reduce certain classes of errors, but it doesn’t replace the need for disciplined code structure and readability.

Qodana supports this reality through automation. By codifying Kotlin quality rules and running them consistently, teams can remove repetitive review comments and focus human attention on business logic and product decisions.

This is not about replacing developers’ judgment – it’s about reserving that judgment for where it adds the most value.

Kotlin quality beyond the Android boundary

One of the less obvious advantages of using Qodana in Android contexts is that it does not stop at the Android module boundary. Many Android teams share Kotlin code across mobile and non-mobile components, and maintaining consistent standards across those environments is often challenging.

Because Qodana operates at the language and project level, it allows teams to apply the same Kotlin inspections across shared libraries and supporting systems. This reduces fragmentation and helps organizations treat Kotlin as a strategic language rather than a platform-specific implementation detail.

In fact, the official Kotlin team even uses Qodana for internal quality and security.

Read Case Study

Practical quality check adoption for existing Android codebases

One of the biggest barriers to adopting stricter quality checks is existing technical debt. Many Android projects are years old, and introducing a new tool that flags thousands of issues is rarely productive.

Qodana is designed to support gradual adoption. Teams can baseline existing issues, focus on preventing new regressions, and incrementally tighten standards over time. This makes it viable for real-world Android projects rather than only greenfield applications.

The ability to configure inspections, tailor rulesets, and control enforcement levels is essential for long-term success.
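As an illustrative sketch of a baselined CI run (the Docker image name and flags below are assumptions; verify them against the Qodana documentation for your stack):

```shell
#!/bin/sh
# Sketch: run Qodana with a baseline so pre-existing issues are tolerated
# and only new regressions fail the build. Image name and flags are
# assumptions; check the Qodana docs before adopting.
PROJECT_DIR="$(pwd)"
BASELINE="qodana.sarif.json"   # generated once from the current state of the codebase

QODANA_CMD="docker run --rm -v $PROJECT_DIR:/data/project/ jetbrains/qodana-jvm --baseline $BASELINE"

# Print the command here; in CI you would execute it and gate the merge on its exit code.
echo "$QODANA_CMD"
```

The key idea is the baseline file: it freezes today's debt so the quality gate only trips on issues introduced after adoption.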

A Kotlin-centric view of Android development

Seen through a Kotlin-first lens, Qodana for Android is less about the platform and more about discipline. It helps teams treat code quality as an automated, shared responsibility rather than a manual afterthought.

For Android teams working primarily in Kotlin, especially those operating at scale or across multiple domains, this focus is both practical and necessary. Quality issues compound quietly over time, and automation is one of the few reliable ways to keep them under control.

Increase quality and security with Qodana

Android development today is as much about managing complexity as it is about building features. Kotlin provides powerful tools, but without consistent enforcement and automation, even well-intentioned teams drift.

Qodana for Android offers a way to bring Kotlin code quality into the foreground of the development process, embedding it into CI and aligning it with developers’ existing workflows. It is not an Android-specific analyzer, but for Kotlin-driven Android teams, that is precisely its strength.

View Docs

Let’s talk at KotlinConf 2026!

Join JetBrains for the most important Kotlin event of the year and find out how Qodana can help your team boost code quality – right from the source. Take part in workshops and tech sessions at various levels, get direct access to the JetBrains team, network with other Kotlin enthusiasts, bag exclusive goodies, and attend the Golden Kodee Awards Ceremony.

KotlinConf Info

Practical Use Of AI Coding Tools For The Responsible Developer

Over the last two years, my team at Work & Co and I have been testing out and gradually integrating AI coding tools like Copilot, Cursor, Claude, and ChatGPT to help us ship web experiences that are used by the masses. Admittedly, after some initial skepticism and a few aha moments, various AI tools have found their way into my daily use. Over time, the list of applications where we found it made sense to let AI take over started to grow, so I decided to share some practical use cases for AI tools for what I call the “responsible developer”.

What do I mean by a responsible developer?

We have to make sure that we deliver quality code as expected by our stakeholders and clients. Our contributions (i.e., pull requests) should not become a burden on our colleagues who will have to review and test our work. Also, in case you work for a company: The tools we use need to be approved by our employer. Sensitive aspects like security and privacy need to be handled properly: Don’t paste secrets, customer data (PII), or proprietary code into tools without policy approval. Treat it like code from a stranger on the internet. Always test and verify.

Note: This article assumes some very basic familiarity with AI coding tools like Copilot inside VS Code or Cursor. If all of this sounds totally new and unfamiliar to you, the GitHub Copilot video tutorials can be a fantastic starting point.

Helpful Applications Of AI Coding Tools

Note: The following examples will mainly focus on working in JavaScript-based web applications like React, Vue, Svelte, or Angular.

Getting An Understanding Of An Unfamiliar Codebase

It’s not uncommon to work on established codebases, and joining a large legacy codebase can be intimidating. Simply open your project and your AI agent (in my case, Copilot Chat in VSCode) and start asking questions just like you would ask a colleague. In general, I like to talk to any AI agent just as I would to a fellow human.

Here is a more refined example prompt:

“Give me a high-level architecture overview: entrypoints, routing, auth, data layer, build tooling. Then list 5 files to read in order. Treat explanations as hypotheses and confirm by jumping to referenced files.”

You can keep asking follow-up questions like “How does the routing work in detail?” or “Talk me through the authentication process and methods” and it will lead you to helpful directions to shine some light into the dark of an unfamiliar codebase.

Triaging Breaking Changes When Upgrading Dependencies

Updating npm packages, especially when they come with breaking changes, can be tedious, time-consuming work that leaves you debugging a fair number of regressions. I recently had to upgrade the data visualization library plotly.js by one major version, from 2 to 3, and as a result, the axis labeling in some of the graphs stopped working.

I went on to ask ChatGPT:

“I updated my Angular project that uses Plotly. I updated the plotly.js-dist package from version 2.35.2 to 3.1.0 — and now the labels on the x and y axis are gone. What happened?”

The agent promptly came back with a working solution.

Note: I still verified the explanation against the official migration guide before shipping the fix.

Replicating Refactors Safely Across Files

Growing codebases most certainly unveil opportunities for code consolidation. For example, you notice code duplication across files that can be extracted into a single function or component. As a result, you decide to create a shared component that can be included instead and perform that refactor in one file. Now, instead of manually carrying out those changes to your remaining files, you ask your agent to roll out the refactor for you.

Agents let you select multiple files as context. Once the refactor for one file is done, I can add both the refactored and untouched files into context and prompt the agent to roll out the changes to other files like this: “Replicate the changes I made in file A to file B as well”.

Implementing Features In Unfamiliar Technologies

One of my favorite aha-moments using AI coding tools was when it helped me create a quite complex animated gradient animation in GLSL, a language I have been fairly unfamiliar with. On a recent project, our designers came up with an animated gradient as a loading state on a 3D object. I really liked the concept and wanted to deliver something unique and exciting to our clients. The problem: I only had two days to implement it, and GLSL has quite the steep learning curve.

Again, an AI tool (in this case, ChatGPT) came in handy, and I started quite simply prompting it to create a standalone HTML file for me that renders a canvas and a very simple animated color gradient. Step after step, I prompted the AI to add more finesse to it until I arrived at a decent result so I could start integrating the shader into my actual codebase.

The end result: Our clients were super happy, and we delivered a complex feature in a small amount of time thanks to AI.

Writing Tests

In my experience, there’s rarely enough time on projects to continuously write and maintain a proper suite of unit and integration tests, and on top of that, many developers don’t really enjoy writing tests. Prompting your AI helper to set up and write tests for you is entirely possible and quick to do. Of course, you as a developer should still make sure that your tests cover the critical parts of your application and follow sensible testing principles, but you can “outsource” the writing of the tests to your AI helper.

Example prompt:

“Write unit tests for this function using Jest. Cover happy path, edge cases, and failure modes. Explain why each test exists.”

You can even pass along testing guru Kent C. Dodds’ testing best practices as guidelines to your agent.

Internal Tooling

Somewhat similar to the shader example mentioned earlier, I was recently tasked to analyze code duplication in a codebase and compare before and after a refactor. Certainly not a trivial task if you don’t want to go the time-consuming route of comparing files manually. With the help of Copilot, I created a script that analyzed code duplication for me, arranged and ordered the output in a table, and exported it to Excel. Then I took it a step further. When our code refactor was done, I prompted the agent to take my existing Excel sheet as the baseline, add in the current state of duplication in separate columns, and calculate the delta.

Updating Code Written A Long Time Ago

Recently, an old client of mine hit me up, as over time, a few features weren’t working properly on his website anymore.

The catch: The website was built almost ten years ago, the JavaScript and SCSS builds relied on rather old tooling like RequireJS, and the setup required an older version of Node.js that wouldn’t even run on my 2025 MacBook.

Updating the whole build process by hand would have taken me days, so I decided to prompt the AI agent, “Can you update the JS and SCSS build process to a lean 2025 stack like Vite?” It sure did, and after around an hour of refining with the agent, I had my SCSS and JS build switched to Vite, and I was able to focus on actual bugfixing. Just make sure to properly validate the output and compiled files when doing such integral changes to your build process.

Summarizing And Drafting

Would you like to summarize all your recent code changes in one sentence for a commit message? Or do you have a long list of commits you’d like to condense into three bullet points? No problem: let the AI take care of it, but please make sure to proofread the result.

An example prompt is as simple as messaging a fellow human: “Please sum up my recent changes in concise bullet points”.

My advice here would be to use GPT for writing with caution, and as with code, please check the output before sending or submitting.

Recommendations And Best Practices

Prompting

One of the not-so-obvious benefits of using AI is that it forces discipline: the more specific and tailored your prompts are, the better the output. The process of prompting an AI agent makes us formulate our requirements as specifically as possible before we write any code. As a general rule, I highly recommend being as specific as possible with your prompting.

Ryan Florence, co-author of Remix, suggests a simple yet powerful way to improve this process by finishing your initial prompt with the sentence:

“Before we start, do you have any questions for me?”

At this point, the AI usually comes back with helpful questions where you can clarify your specific intent, guiding the agent to provide you with a more tailored approach for your task.

Use Version Control And Work In Digestible Chunks

Using version control like git not only comes in handy when collaborating as a team on a single codebase but also to provide you as an individual contributor with stable points to roll back to in case of an emergency. Due to its non-deterministic nature, AI can sometimes go rogue and make changes that are simply not helpful for what you are trying to achieve and eventually break things irreparably.

Splitting up your work into multiple commits will help you create stable points that you can revert to in case things go sideways. And your teammates will thank you as well, as they will have an easier time reviewing your code when it is split up into semantically well-structured chunks.

Review Thoroughly

This is more of a general best practice, but in my opinion, it becomes even more important when using AI tools for development work: Be the first critical reviewer of your code. Make sure to take some time to go over your changes line by line, just like you would review someone else’s code, and only submit your work once it passes your own self-review.

“Two things are both true to me right now: AI agents are amazing and a huge productivity boost. They are also massive slop machines if you turn off your brain and let go completely.”

— Armin Ronacher in his blog post Agent Psychosis: Are We Going Insane?

Conclusion And Critical Thoughts

In my opinion, AI coding tools can improve our productivity as developers on a daily basis and free up mental capacity for more planning and high-level thinking. They force us to articulate our desired outcome with meticulous detail.

Any AI can, at times, hallucinate, which basically means it lies in a confident tone. So please make sure to check and test, especially when you are in doubt. AI is not a silver bullet, and I believe excellence and the ability to solve problems as a developer will never go out of fashion.

For developers who are just starting out in their careers, these tools can make it highly tempting to let AI do the majority of the work. What may get lost here is the often draining and painful work through bugs and issues that are tricky to debug and solve, aka “the grind”. Even Cursor AI’s very own Lee Robinson questions this in one of his posts.

AI coding tools are evolving at a fast pace, and I am excited for what will come next. I hope you found this article and its tips helpful and are excited to try out some of these for yourself.

29. Delete Backup from S3 Using Terraform

Lab Information

The Nautilus DevOps team is currently engaged in a cleanup process, focusing on removing unnecessary data and services from their AWS account. As part of the migration process, several resources were created for one-time use only, necessitating a cleanup effort to optimize their AWS environment.

An S3 bucket named nautilus-bck-29479 already exists.

1) Copy the contents of nautilus-bck-29479 S3 bucket to /opt/s3-backup/ directory on terraform-client host (the landing host once you load this lab).

2) Delete the S3 bucket nautilus-bck-29479.

3) Use the AWS CLI through Terraform to accomplish this task—for example, by running AWS CLI commands within Terraform. The Terraform working directory is /home/bob/terraform. Update the main.tf file (do not create a separate .tf file) to accomplish this task.

Note: Right-click under the EXPLORER section in VS Code and select Open in Integrated Terminal to launch the terminal.

Lab Solutions

Step 1: Create Main Terraform Configuration

# main.tf

# Execute AWS CLI commands to copy and delete S3 bucket
resource "null_resource" "s3_cleanup" {
  provisioner "local-exec" {
    command = <<EOT
      echo "Creating backup directory..."
      mkdir -p /opt/s3-backup/

      echo "Copying contents from S3 bucket to local directory..."
      aws s3 cp s3://nautilus-bck-29479 /opt/s3-backup/ --recursive

      echo "Deleting the S3 bucket..."
      aws s3 rb s3://nautilus-bck-29479 --force
    EOT
  }
}

Step 2: Deploy the Configuration

Navigate to the Terraform directory:

cd /home/bob/terraform

Initialize Terraform:

terraform init

Output

bob@iac-server ~/terraform via 💠 default ➜  terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "5.91.0"...
- Installing hashicorp/aws v5.91.0...
- Installed hashicorp/aws v5.91.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Plan the deployment to verify the configuration:

terraform plan

Output

bob@iac-server ~/terraform via 💠 default ➜  terraform plan

Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.s3_cleanup will be created
  + resource "null_resource" "s3_cleanup" {
      + id = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to
take exactly these actions if you run "terraform apply" now.

Apply the configuration:

terraform apply

Then type yes when prompted to confirm the execution of the S3 cleanup.

Output

bob@iac-server ~/terraform via 💠 default ➜  terraform apply

Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.s3_cleanup will be created
  + resource "null_resource" "s3_cleanup" {
      + id = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

null_resource.s3_cleanup: Creating...
null_resource.s3_cleanup: Provisioning with 'local-exec'...
null_resource.s3_cleanup (local-exec): Executing: ["/bin/sh" "-c" "      echo \"Creating backup directory...\"\n      mkdir -p /opt/s3-backup/\n\n      echo \"Copying contents from S3 bucket to local directory...\"\n      aws s3 cp s3://nautilus-bck-29479 /opt/s3-backup/ --recursive\n\n      echo \"Deleting the S3 bucket...\"\n      aws s3 rb s3://nautilus-bck-29479 --force\n"]
null_resource.s3_cleanup (local-exec): Creating backup directory...
null_resource.s3_cleanup (local-exec): Copying contents from S3 bucket to local directory...
null_resource.s3_cleanup (local-exec): Completed 27 Bytes/27 Bytes (2.7 KiB/s) with 1 file(s) remaining
null_resource.s3_cleanup (local-exec): download: s3://nautilus-bck-29479/nautilus.txt to ../../../opt/s3-backup/nautilus.txt
null_resource.s3_cleanup (local-exec): Deleting the S3 bucket...
null_resource.s3_cleanup (local-exec): delete: s3://nautilus-bck-29479/nautilus.txt
null_resource.s3_cleanup (local-exec): remove_bucket: nautilus-bck-29479
null_resource.s3_cleanup: Creation complete after 1s [id=4571548781870379384]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
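If you want to double-check the result on the terraform-client host, a quick sanity check might look like this (purely optional, not part of the lab requirements; the paths and bucket name are the ones from the task):

```shell
#!/bin/sh
# Optional sanity checks after `terraform apply`.
BACKUP_DIR="/opt/s3-backup"
BUCKET="nautilus-bck-29479"

# The backed-up objects should now be on the local filesystem:
ls -l "$BACKUP_DIR" 2>/dev/null || echo "note: $BACKUP_DIR not found on this host"

# The bucket itself should be gone:
aws s3 ls "s3://$BUCKET" 2>/dev/null && echo "bucket still exists" || echo "bucket removed as expected"
```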

Resources & Next Steps

📦 Full Code Repository: KodeKloud Learning Labs

📖 More Deep Dives: Whispering Cloud Insights – Read other technical articles

💬 Join Discussion: DEV Community – Share your thoughts and questions

💼 Let’s Connect: LinkedIn – I’d love to connect with you

Credits

• All labs are from: KodeKloud

• I sincerely appreciate your provision of these valuable resources.

Why Kubernetes Docs Prefer Headlamp Over the Kubernetes Dashboard

When people hear “Kubernetes UI”, the first thing that comes to mind is the Kubernetes Dashboard.

It used to be the default choice.

But if you read the official Kubernetes documentation carefully, you’ll notice a shift:

For detailed insight and troubleshooting, tools like Headlamp are preferred.

This isn’t hype. It’s architectural reality.


What Is Headlamp?

It is an open-source Kubernetes UI originally developed by Kinvolk (now part of Microsoft).

It is not a replacement for kubectl.
It is a visual layer on top of the kubectl mental model.

Think of Headlamp as:

  • kubectl + context
  • resource relationships
  • RBAC awareness
  • CRD visibility

Why Kubernetes Dashboard Is No Longer Enough

Let’s be blunt.

The Kubernetes Dashboard is:

  • Object-centric
  • Shallow in insight
  • Designed for basic operations

It answers:

  • Is the pod running?
  • Can I delete this deployment?

It does not answer:

  • Why is the pod restarting?
  • Which controller owns this resource?
  • What RBAC rule is blocking this action?
  • How do these objects relate to each other?

Modern Kubernetes clusters are systems, not collections of objects.

Why Headlamp Is Preferred

1. Resource Relationships (The Most Important Reason)

Headlamp shows ownership and flow:

  • Pod → ReplicaSet → Deployment
  • Service → Endpoints → Pods
  • Ingress → Service → Workload

This directly matches how Kubernetes works internally.

The Dashboard treats resources as isolated items.

Headlamp treats them as a reconciled system.
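The relationship data Headlamp visualizes is already present on every object as metadata.ownerReferences. As a small sketch, here is how to read it from a saved Pod manifest (the object names are made up; on a live cluster you would fetch the JSON with kubectl instead):

```shell
#!/bin/sh
# Sketch: the Pod -> ReplicaSet link Headlamp draws comes straight from
# metadata.ownerReferences. A sample manifest is inlined; names are illustrative.
cat > /tmp/pod.json <<'EOF'
{
  "kind": "Pod",
  "metadata": {
    "name": "web-7d4b9c-abcde",
    "ownerReferences": [
      { "kind": "ReplicaSet", "name": "web-7d4b9c", "controller": true }
    ]
  }
}
EOF

# Extract the owning controller's kind and name:
python3 - <<'EOF'
import json
owner = json.load(open("/tmp/pod.json"))["metadata"]["ownerReferences"][0]
print(owner["kind"], owner["name"])  # the ReplicaSet that manages this Pod
EOF
```

Walking that chain by hand (Pod to ReplicaSet to Deployment) is exactly the tedium Headlamp automates.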

2. Matches How Engineers Actually Use Kubernetes

Experienced engineers think in terms of:

  • namespaces
  • contexts
  • YAML
  • controllers
  • CRDs

Headlamp:

  • Shows full YAML
  • Allows inspection without hiding complexity
  • Treats CRDs as first-class citizens

The Dashboard tries to abstract YAML away.

That abstraction becomes a problem in real clusters.

3. CRDs and Operators Work Properly

Modern Kubernetes is operator-driven:

  • Argo CD
  • Prometheus Operator
  • Cert-Manager
  • Istio
  • KServe

Headlamp:

  • Auto-discovers CRDs
  • Displays status fields correctly
  • Understands custom schemas

Kubernetes Dashboard often:

  • Ignores CRDs
  • Renders them poorly
  • Breaks with non-core resources

That alone disqualifies it for production use.

4. Security Model That Aligns with Kubernetes

Dashboard:

  • Runs inside the cluster
  • Requires long-lived service accounts
  • Encourages risky RBAC shortcuts

Headlamp:

  • Runs locally or externally
  • Uses your kubeconfig
  • Respects RBAC exactly like kubectl

No extra attack surface.
No privileged dashboard pods.

This matches Kubernetes security best practices.

5. Designed for Insight, Not Click-Ops

Dashboard was built for:

“Click buttons to manage resources”

Headlamp is built for:

“Understand what the cluster is doing”

That difference matters when:

  • debugging production issues
  • tracing failures
  • understanding controller behavior

Why Kubernetes Documentation Leans Toward Headlamp

Kubernetes today assumes:

  • CRDs are everywhere
  • Operators manage most workloads
  • YAML is unavoidable
  • Security matters more than convenience

Headlamp supports how Kubernetes is actually used today, not how it was used years ago.

That’s why it’s preferred for detailed insight.

When the Kubernetes Dashboard Still Makes Sense

Let’s be honest: it’s not useless.

Use Kubernetes Dashboard if:

  • You are teaching beginners
  • You want a quick demo
  • You need very basic visibility

Use Headlamp if:

  • You run real workloads
  • You debug failures
  • You work with operators and CRDs
  • You care about RBAC and security

Final Takeaway

Kubernetes is complex by design.

A UI that hides that complexity becomes a liability.

Headlamp doesn’t simplify Kubernetes — it explains it.

That’s why Kubernetes documentation points you in that direction.

If you work with production Kubernetes clusters, Headlamp isn’t optional — it’s practical.