Helge Sverre | All-stack Developer | Bergen, Norway
Content Notice: This post is really old and no longer reflects my skill level, views, or opinions. It is made available here for archival purposes (it was originally on my old WordPress blog). Keep that in mind when you read it.
Stupid Mistakes I Learned From
November 10, 2016

1. UPDATE users SET username=newusername;

Do you see it?

Yup, I forgot the WHERE clause, causing EVERY USER to have its username set to "newusername". It's an easy mistake to make, but it's not a mistake you want to make in PRODUCTION...

Not that you should ever have to go into the production database and write SQL by hand anyway, but you know the startup "just get it done now" mentality.

How did I fix it? Well, I was using HeidiSQL at the time, and fortunately it caches all the data from the table until you "refresh" the list view. So I could quickly select all rows in the users table in HeidiSQL -> right click -> Export Grid Rows -> Copy to Clipboard -> paste into the Query tab, and change everything back to its original state.

Pheeeefffff, should have been fucking fired.

What I learned:

  • Double check your SQL query, every time.
  • Don't fuck with the database in production.
  • Don't fuck with the database in production when you haven't double checked your SQL query.
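In hindsight, the habit that would have saved me is wrapping any manual UPDATE in a transaction and checking the affected row count before committing. Here's a minimal sketch of that pattern using Python's built-in sqlite3 as an in-memory stand-in (the table and usernames are made up, not the actual production schema):

```python
import sqlite3

# In-memory stand-in for the production users table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.executemany("INSERT INTO users (username) VALUES (?)",
                 [("alice",), ("bob",), ("carol",)])
conn.commit()

# Run the risky UPDATE inside a transaction and inspect the row count
# before anything becomes permanent.
cur = conn.execute("UPDATE users SET username = 'newusername'")  # oops: no WHERE
if cur.rowcount == 1:   # we intended to rename exactly one user
    conn.commit()
else:
    conn.rollback()     # 3 rows touched -> undo it all, production is safe

usernames = [row[0] for row in conn.execute("SELECT username FROM users ORDER BY id")]
print(usernames)  # ['alice', 'bob', 'carol']
```

The same idea works on a real MySQL console: `START TRANSACTION;`, run the UPDATE, eyeball the "Rows matched" count, then `COMMIT;` or `ROLLBACK;`.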

2. Violently unplugged an important server.

In my IT apprentice days, I was tasked with disassembling a decommissioned server from the server room. However, I spent little time actually checking which server it was and where it was located, and unmounted the oldest-looking POS I could find in the rack, unfortunately assuming it to be the right one...

It was not.

The server was powered on (the power light was very dim and I didn't see the green light at the time) when I removed its power cable and moved it to its final resting place: the bin.

Little did I know that this old computer was actually... an extremely fucking critical database server ಠ_ಠ

I had managed to brick its power socket with my less-than-delicate handling of the hardware, and it would subsequently not power back on.

How did I fix it?

There was another computer of the exact same make and model lying under the table we used to disassemble and replace parts.

I took the hard drive out of the super-critical-db-server™, chucked it into the replacement computer, powered it on, and beep beep wooooshhhhh, it booted.

What I learned:

  • Never assume anything when dealing with expensive servers or production, be 100% certain.
  • Be gentle with hardware, even if it is going in the trash.

3. git push --force

There was a dark time before I fully understood how git actually worked. I was working on my local develop branch, building some important feature for a web application. The day came when I deemed the feature "good enough for prod", and I tried to commit and push up the changes.

Nope.

Ignoring all the error messages, I remembered having read about --force, and thought of it as a magical "just fucking work" flag. Completely my own fault: using that flag turned out to fuck up the changes that were in the repo and overwrite them with my local dev branch, which at that point was outdated. We had to spend a day getting the repository back to where it was supposed to be and merging together all the changes that were supposed to be merged.

What I learned:

  • If a command or program has a --force option, don't use it if you don't know what the fuck you're doing.
  • Always read the error message; if you don't understand it, google it.
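These days the safer spelling of that flag is --force-with-lease, which refuses the push if the remote has commits you haven't fetched yet. Here's a sketch of exactly the scenario that burned me, scripted with Python's subprocess against throwaway local repos (the repo and developer names are made up for illustration, and this assumes git is on your PATH):

```python
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command; returns CompletedProcess, never raises on failure."""
    return subprocess.run(["git", *args], cwd=cwd, capture_output=True, text=True)

base = tempfile.mkdtemp()

# A bare "server" repo plus two clones standing in for two developers.
origin = os.path.join(base, "origin.git")
git("init", "--bare", origin, cwd=base)
for name in ("alice", "bob"):
    path = os.path.join(base, name)
    git("clone", origin, path, cwd=base)
    git("config", "user.email", f"{name}@example.com", cwd=path)
    git("config", "user.name", name, cwd=path)
alice, bob = os.path.join(base, "alice"), os.path.join(base, "bob")

# Alice publishes the initial commit; Bob builds on top of it and pushes.
git("commit", "--allow-empty", "-m", "base", cwd=alice)
git("push", "origin", "HEAD:main", cwd=alice)
git("fetch", "origin", cwd=bob)
git("checkout", "-b", "main", "origin/main", cwd=bob)
git("commit", "--allow-empty", "-m", "bob's work", cwd=bob)
git("push", "origin", "main", cwd=bob)

# Alice rewrites her local history without fetching first -- my old mistake.
git("commit", "--amend", "--allow-empty", "-m", "alice rewrites history", cwd=alice)

# A plain --force here would silently erase Bob's commit. --force-with-lease
# refuses, because origin/main no longer matches what Alice last saw.
result = git("push", "--force-with-lease", "origin", "HEAD:main", cwd=alice)
print(result.returncode)  # non-zero: the push was rejected, Bob's work survives
```

The lease is just a compare-and-swap on the remote ref: the push only goes through if the remote branch still points where your remote-tracking ref says it does, so you can never clobber work you haven't seen.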


