
Building your own game console

That’s quite a sensationalist title!

I didn’t honestly set out to build a console, but it was a fun journey and I thought it would be great to share. There are a couple of learnings I would like to pass on, and a couple of hopes and dreams I have for the future.

Living in South Africa, I sometimes feel a little out of touch with all the cool gadgets that you can obtain overseas relatively easily and seemingly cheaply. *cough* Arduboy *cough*. Thankfully I’ve been exposed to a couple of pieces of awesome hardware from some cool people, via chatting and conferences, where they sometimes give out awesome badges.

My journey started about 3 years ago when a friend who ordered them from overseas gave me an ESP8266 for almost nothing. All you have to do is write a bit of C code, upload it via USB, and it can host a website!

I wrote a previous article about my earlier ESP8266 tinkerings here.

On the back of that I got the BSides Cape Town 2016 badge, which had some epic hardware on it too, and for the past 6 or so months I have consistently been coming back to this project.

What was I trying to do ?

Initially my goal was to write a game for the BSides Cape Town 2016 Badge.

I have always wanted to write games and for various reasons, I’ve never finished a full game, so more than anything I wanted to write the full game start to finish and release it into the wild in some form or fashion.

Pic stolen from Andrew’s great blog post on the BSides 2016 badge

The badge was perfect for it. The ESP8266 is a great processor, the LiPo on the badge was great (well over 2 hours of battery life), the screen is 128×64 pixels and it has 8 buttons! There was a single caveat and a phrase I still remember, “We’re all red on the inside”, which was how you remembered that the red wire of the battery goes on the inside pin of the two pin connector or … FIRE!

I got the Arduino IDE up and running, installed the required plugins (the screen driver and some other small libraries) and did a few minor updates to get the code working on the latest versions of the libraries. The SSD1306 screen library had some awesome speed tweaks, claiming you could now get over 60fps on it by using the BRZO I2C code, which had optimised most of the I2C routines in assembler.

Sounds great! I actually got the game up and running and wrote the first part of the story. Yes, I came up with a story…

The Story

The game was going to be about the journey to DefCon. You fly a plane to the States, then to Las Vegas (Stages 1 and 2).

You drive from the Airport to the Hotel (Stages 3 and 4).

And you then have to hunt your way through the hallways to the conference venue looking for the talk you’re meant to attend (Final Stage). Initially this was going to be a beat em up about fighting your way to the talk, but that seemed a bit violent in the end…

There were a couple of special effects and some code I’m really proud of in there, so I’ll come back to that again, but here’s the win screen for now.

Videos will follow once I have a chance to explain them all.

The whole concept sounds easy enough, and I can happily say I did complete the game to the level it deserved :P.

If you want to give it a go, it’s on this repo here. And this leads into my favourite part of the project and the continued work I’ve put into it. You can play it in a web-browser here and the maze game here, plus the voxel demo I made here.

The pivot

So, after getting to a point where I liked the game quite a bit, I broke my BSides badge (very sad panda face). I had some guys help me fix it and it worked for a while again… until I broke it again. This put me on a multi-path journey. I didn’t want to be without hardware again, so I would build my own. Additionally, having to deploy to the badge every time to test my current work was becoming a bit tedious, so I decided that I would write a game engine that worked on my PC as well.

Build your own badge adventure

I was happy with the components of the ESP8266 and the SSD1306 screen, but I was very keen to have some sort of novelty feel to it (and I didn’t have buttons), so I researched a ton and found a way to do touch input using a pencil.

The idea was great and it looked great, but…

I honestly spent too much time on this, but the learnings I gained were useful. I did decide to go with touch buttons hooked up to the analog input on the ESP, using digital input pins as 3.3V output drivers. Here’s a link to the original instructable I found on the topic. Then I found a way to multiplex the analog input, since the ESP8266 only has one, here.
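To make the idea concrete, here is a rough Arduino-style sketch of how that kind of scanned touch input can work. It is not the project’s actual code; the GPIO numbers and threshold are made-up placeholders you would tune for your own resistors and pads.

// Rough Arduino-style sketch of the idea (not the project's actual code):
// each touch pad is driven from its own digital pin through a large resistor,
// and all pads share the single A0 analog input. Scanning one pad at a time
// lets one ADC read serve several "buttons".
const int drivePins[4] = {5, 4, 14, 12};   // hypothetical GPIO numbers
const int TOUCH_THRESHOLD = 512;           // tune for your resistors and pads

void setup() {
  for (int i = 0; i < 4; i++) {
    pinMode(drivePins[i], OUTPUT);
    digitalWrite(drivePins[i], LOW);
  }
}

bool isPressed(int button) {
  digitalWrite(drivePins[button], HIGH);   // drive only this pad at 3.3V
  delayMicroseconds(50);                   // let the reading settle
  int value = analogRead(A0);              // the one shared analog input
  digitalWrite(drivePins[button], LOW);
  // A finger on the pad changes the voltage divider seen at A0.
  return value > TOUCH_THRESHOLD;
}

void loop() {
  for (int i = 0; i < 4; i++) {
    if (isPressed(i)) {
      // handle button i
    }
  }
}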

And with that I had avoided needing to buy buttons or a multiplexer chip like the one on the original badge.

For power, I went with 2x AA batteries which can power the ESP8266 directly without any additional hardware.

The screen and processor slotted on nicely and with the updated libraries I was getting great frame rates on simple code.

I wired it all up on a breadboard and I was A for away

4 input buttons on the left, the screen and processor in the middle, and down right I added audio at one point!

After this I moved onto the protoboard version with a Wemos D1 Mini, which was nice and small while still having the full functionality of the bigger ESP8266 boards.

Speakers on the back… sometimes.

Play it anywhere

I also decided that making this cross-compile was now a goal too, so that I could write and test the code on my desktop computer and then do slightly less rigorous testing on the device, a little less often too, to tighten my own internal feedback loop.

Bret Victor gave an amazing talk on Inventing on Principle a few years back and I always keep it in mind.

So I worked through it piece by piece and got the code to compile on the

Windows Console

Running Maze Runner on Windows in a Console Window

Then the Linux Console

Running the Matrix code in a Linux Console

Then SDL

Running the Matrix code via SDL2 on Linux

Once I had SDL up and running, I realised that Emscripten worked with that so I did some small code updates for that too, and that got me the engine running in a Web-browser. (Up, Down, Left, Right on keypad and WASD on rest of keyboard for input)

Maze game in a Web Browser

That was an amazing experience, as getting the code working on all those platforms made me really think about what needed to be abstracted and where.

The biggest items were input, timing and output, for which I built separate parts of the code in the platform-specific files, and for things which were platform dependent I used CMake to be able to build different libraries and code on different platforms.
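To give a feel for that split, here is a simplified sketch (not the engine’s real interface) of the kind of platform layer the game code can sit on top of, with a stubbed desktop implementation standing in for the per-platform files that CMake would normally select.

// Simplified sketch of the abstraction idea (not the engine's real interface).
// The game only talks to this small platform API; each target (ESP8266, SDL,
// Windows/Linux console, Emscripten) supplies its own implementation in a
// platform-specific file that the build selects.
#include <chrono>
#include <cstdint>
#include <iostream>

namespace platform {
  uint32_t millisNow() {                       // monotonic time in milliseconds
    using namespace std::chrono;
    static auto start = steady_clock::now();
    return (uint32_t)duration_cast<milliseconds>(steady_clock::now() - start).count();
  }
  uint8_t readButtons() { return 0; }          // stub: bitmask of pressed buttons
  void    present()     { std::cout << "frame\n"; }  // stub: push the buffer to the screen
}

// The shared game loop then looks the same on every platform:
int main() {
  uint32_t last = platform::millisNow();
  for (int frame = 0; frame < 3; ++frame) {    // a few frames for the example
    uint32_t now = platform::millisNow();
    uint8_t buttons = platform::readButtons();
    // update(now - last, buttons); render();  // game logic stays portable
    platform::present();
    last = now;
  }
}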

CMake Logo

For the ESP8266 there were a lot of performance issues, since it has no FPU (floating point unit) and I wanted to do precise maths.

This affected the RotoZoomer, maze game and voxel demo.

I took a few hints from Michael Abrash’s Graphics Programming Black Book and implemented fixed point maths, which essentially means you fake maths with decimals using only integers (and bit shifting).
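As a rough illustration of the idea (a minimal sketch, not the engine’s actual library), an 8.8 fixed point number stores values scaled by 256, so a multiply needs a shift back down and a divide needs a shift up first.

// Minimal fixed-point sketch (not the engine's actual library): 8.8 format,
// i.e. the top bits are the whole part and the low 8 bits are the fraction,
// so 1.0 is represented as 256.
#include <cstdint>
#include <iostream>

typedef int32_t fixed;                  // 32 bits gives multiplies some headroom
const int SHIFT = 8;

fixed toFixed(int n)         { return n << SHIFT; }
float toFloat(fixed f)       { return f / 256.0f; }
fixed fmul(fixed a, fixed b) { return (a * b) >> SHIFT; }  // re-scale after multiply
fixed fdiv(fixed a, fixed b) { return (a << SHIFT) / b; }  // pre-scale before divide

int main() {
  fixed threeAndHalf = toFixed(3) + 128;   // 3.5, since 128/256 = 0.5
  fixed two = toFixed(2);
  std::cout << toFloat(fmul(threeAndHalf, two)) << "\n";  // prints 7
  std::cout << toFloat(fdiv(threeAndHalf, two)) << "\n";  // prints 1.75
}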

Mowr Pivot ?

I decided to document the process and build out a gaming platform similar to the Arduboy and release it with its much more accessible (to me) components.

And thus was launched ESP8266 Game On

I refactored all the games to be a single theme and they sit on their own branches.

As of this writing, all the original game elements are there as separate games:

Driving, Flying, the RotoZoomer and the Maze

And as the new flagship title I implemented my own from-scratch game, Asteroid!

ESP8266 Game Engine Asteroid! in play

What features does the game engine have

Multi-platform (Windows, Linux, SDL, HTML5 and ESP8266)
The code runs on all the above listed platforms, catering for the main loop, timing, display, input and audio.

Pixel Perfect Collision Detection
Since the sprites are basically bitmasks, you can double up their usage as masks for pixel perfect collision detection in game. For good performance, the code uses bounding box checks first, then pixel checks only if the bounding boxes overlap.
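A rough sketch of that two-stage check, assuming 8×8 one-byte-per-row sprite masks (the engine’s real sprites and code will differ):

// Rough sketch of the two-stage check (not the engine's actual code): sprites
// are 8x8 bitmasks, one byte per row. The cheap bounding-box test runs first;
// only if the boxes overlap do we AND the overlapping rows bit by bit.
#include <cstdint>
#include <iostream>

struct Sprite { int x, y; uint8_t mask[8]; };  // 8x8, bit 7 = leftmost pixel

bool collide(const Sprite& a, const Sprite& b) {
  // Stage 1: bounding boxes.
  if (a.x + 8 <= b.x || b.x + 8 <= a.x) return false;
  if (a.y + 8 <= b.y || b.y + 8 <= a.y) return false;
  // Stage 2: pixel masks over the overlapping rows, aligned to sprite a.
  for (int row = 0; row < 8; ++row) {
    int by = a.y + row - b.y;                 // matching row in sprite b
    if (by < 0 || by >= 8) continue;
    int shift = b.x - a.x;                    // horizontal offset between sprites
    uint16_t rowA = (uint16_t)a.mask[row] << 8;
    uint16_t rowB = shift >= 0 ? ((uint16_t)b.mask[by] << 8) >> shift
                               : ((uint16_t)b.mask[by] << 8) << -shift;
    if (rowA & rowB) return true;             // a set pixel overlaps in both
  }
  return false;
}

int main() {
  Sprite a{0, 0, {0x80, 0, 0, 0, 0, 0, 0, 0}};  // single pixel at top-left
  Sprite b{1, 0, {0x80, 0, 0, 0, 0, 0, 0, 0}};  // single pixel one to the right
  std::cout << collide(a, b) << "\n";           // 0: boxes overlap, pixels don't
}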

Fixed Point Maths
To get great performance out of the complex decimal calculations needed for some of the features below, I implemented my own library to help with these calcs.

It does some basic trig, multiplication and division; it’s all designed with the ESP8266 in mind, but it does work cross platform.

Rotational Effects
Think Super Nintendo Mode 7 type effects. Zooming and rotating sprites. The asteroids and ship in Asteroid! are only rotated, not animated per se.
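The usual trick, sketched below rather than taken from the engine, is reverse mapping: for each destination pixel, rotate its coordinates back into the source sprite and sample there.

// Sketch of the reverse-mapping idea behind a rotozoomer (not the engine's
// actual code): for every destination pixel, rotate its coordinates back by
// the angle around the sprite centre and sample the source sprite there.
#include <cmath>
#include <cstdint>
#include <iostream>

const int W = 8, H = 8;

bool samplePixel(const uint8_t sprite[H], int x, int y) {
  if (x < 0 || x >= W || y < 0 || y >= H) return false;
  return (sprite[y] >> (7 - x)) & 1;
}

void drawRotated(const uint8_t sprite[H], float angle) {
  float c = std::cos(angle), s = std::sin(angle);
  for (int dy = 0; dy < H; ++dy) {
    for (int dx = 0; dx < W; ++dx) {
      // Rotate the destination pixel back into source space around the centre.
      float rx = (dx - W / 2) * c + (dy - H / 2) * s + W / 2;
      float ry = -(dx - W / 2) * s + (dy - H / 2) * c + H / 2;
      std::cout << (samplePixel(sprite, (int)rx, (int)ry) ? '#' : '.');
    }
    std::cout << '\n';
  }
}

int main() {
  const uint8_t arrow[H] = {0x18, 0x3C, 0x7E, 0x18, 0x18, 0x18, 0x18, 0x18};
  drawRotated(arrow, 3.14159f / 2);   // draw the sprite rotated by 90 degrees
  // On the ESP8266 the sin/cos and multiplies would use fixed-point routines
  // like the ones sketched earlier, instead of floats.
}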

Powerful Hardware platform
The ESP8266 is surprisingly powerful, able to play tech demos like Wolfenstein mazes and Voxel landscapes, when used directly.

Built in Font Engine
There is a built in font with helper functions, specifically built for the screen size.

-= Asteroid! =-

Text scrolling helper
Nice easy way to get your story on screen and going. See the full video of BFlight below.

Fully Open-Source
All the code is available online here

Access to a few complete samples
There are samples for a few common game types, and the raycaster engine I built for the maze game is quite flexible.

Where to from here ?

I am quite happy with the outcome of this project, but I’d like to complete the loop on adding audio: it’s built into the code base already, but I need to do the reference hardware design and build a game with it in mind.

There is still a feature list for things I’d like to add, such as saving scores onto the SPIFFS and not losing them between game installs.

Multiple games on the SPIFFS would be nice and could be doable with a SPIFFS flasher integration.

Some sort of platformer game would be really nice to do, it’s on my todo list for later to flesh out the engine around handling maps and sprites.

And finally I am looking at putting more reference design documentation together and building a few more games to flesh out the engine’s functions.

I think it’s at the point where I can gauge community interest in a project like this and get feedback to help drive a course forwards.

DefCon 26 There and back again

DefCon 26

I was very fortunate to get an opportunity to go to DefCon 26 with a friend of mine for a project he was doing. He needed some extra hands on deck and being in the right place at the right time was fortuitous.

What’s a DefCon ?

It’s an annual security conference in Las Vegas and the biggest global gathering of security experts, black hats, white hats, hackers and anyone at all interested in computer and network related security.

30,000 people make it through the doors every year and there are 6 tracks going on all day, every day, over a period of 4 days. There are also a ton of conference areas for specialist interests and knowledge sharing, as well as hacking contests.

https://www.defcon.org/

There are two other events which take place at the same time. BSides Vegas and Black Hat

Why was I there ?

ElasticNinja had designed, made and manufactured about 150 badges for Monero to give away at DefCon, it was the uber LCD badge.

#BadgeLife
Monero

And on the side I was keen to get a bit of the experience of Las Vegas, being in the US, and being at such an awesome conference.

Badges! You want badges?

Badges always make me think of the scene from Bad Boys.

Badge Video , Badge Code , Badge PCB

The badge was priority number one: get that out the door and into people’s hands, then think about the other stuff. There were a couple of steps to doing that.

Prior to DefCon

  • Get solution to coin challenge ready

Before DEFCON

  • Get to Las Vegas via 3 countries
  • Get Batteries for Badges
  • Charge all batteries
  • Fix everything that unexpectedly went wrong

At DefCon

  • Run a coin challenge competition
  • Monitor solutions
  • Hand out badges
  • Fix broken badges

AFTER DEFCON

Party!

PRIOR TO DEFCON

Getting the coin challenge solution ready prior to DefCon was great fun. My first step was to actually solve ElasticNinja’s challenge, since he didn’t pass out the solution…

You can see the Challenge up at https://monerobadge.org/

At the conference you had a chance to get hold of a Coin from the Monero desk. There were ~1000 of these for the ~150 badges going.

The 4 coins representing the challenge to win a badge

You had to decipher the coin using the information encoded on the front and back to get an encryption key. You would then use this encryption key to send us an encrypted tweet, that no one else had sent, saying something funny.

Once we had received and confirmed the tweet you would win a badge, which you could collect from us.

I put up a ‘cheat sheet’ for my solution at https://github.com/tonym128/monero-badge as well as a running version of the encoder / decoder at https://tonym128.github.io/monero-badge/

The basics of the solution were to work out that the encrypted data was the longer of the two (on the back of the coin) and the encryption key was on the front.

Once you get that, you hopefully get that you’re working with HEX and convert it to ASCII, then you have to start cycling through the different mechanisms for encrypting data using a key.

It’s a bit of a leap to get to the XOR operation, but once you do, you’re laughing. Unfortunately the key doesn’t work 100%; it’s been corrupted as an extra deterrent. Once you’ve got that, tweeting to myself, @fluffypony and @elasticninja got you a badge!
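For illustration only, here is a tiny sketch of those two steps (hex decode, then XOR with a repeating key). The data and key below are placeholders, not the real coin values, and it ignores the deliberate key corruption mentioned above.

#include <iostream>
#include <string>
#include <vector>

// Hex string (as read off the coin) to raw bytes.
std::vector<unsigned char> hexToBytes(const std::string& hex) {
  std::vector<unsigned char> bytes;
  for (size_t i = 0; i + 1 < hex.size(); i += 2)
    bytes.push_back((unsigned char)std::stoi(hex.substr(i, 2), nullptr, 16));
  return bytes;
}

// XOR the data with a repeating key; XOR is symmetric, so the same routine
// both encrypts and decrypts.
std::string xorWithKey(const std::vector<unsigned char>& data, const std::string& key) {
  std::string out;
  for (size_t i = 0; i < data.size(); ++i)
    out += (char)(data[i] ^ key[i % key.size()]);
  return out;
}

int main() {
  // Placeholder values, NOT the real coin data or key.
  std::string back = "030015070a";    // the longer hex string from the back
  std::string front = "KEY";          // the shorter key from the front
  std::cout << xorWithKey(hexToBytes(back), front) << "\n";  // prints HELLO
}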

BEFORE DEFCON

Getting to DefCon was quite the trip. It was 3 flights, with the longest being a 16 hour long haul, which is close to the longest commercial flight on earth (I checked, Auckland to Doha seems like the longest at 17h50m).

Once checked in at the MGM Grand, I had a few hours to kill and figured I would do something useful and grab the batteries.

Just ridiculous in size
This guy stared at me from everywhere!

I took a walk down to the peeps who offered to store the batteries for us in Vegas, and while it was only 3 km away, it was my first exposure to Vegas heat; I think it was over 40 degrees.

Just a nice 3 km walk. If I recall correctly I asked for a glass of water when I got there, but I was a little dazed and confused. I took a taxi back.

The first lesson I learnt in Vegas: don’t go outside.

Taxi’s are your friends, air conditioning is your friend!

Once we had all the batteries, charging 300 batteries with 15 x 4-slot battery chargers was up next! We had to charge every battery from almost flat, and that took 8 hours. With the amount of charging needed before the conference, I had to wake up to swap the batteries out…

One round of charging!

After charging a couple of batteries, we realized the batteries didn’t have the little nubs that connect them to the terminal on the battery holder. Interesting for the most standard battery ever made, the 18650… it wasn’t so standard!

Thus ensued the soldering of 2 x bigger nipples onto… every… single… badge!

The nipple soldering station, not quite as exciting as it sounded.

Staring at the smoke detectors for 2 minutes after soldering every time was quite ‘entertaining’.

There was some time for a break and a coffee before the next disaster struck.

It was interesting taking part in the institution of American coffee.

Whilst being really clever and getting all the badges ready for transport to the conference, we figured we’d load them up with batteries to just be able to hand them out, BAM done.

So here is ElasticNinja programming at night, with the beautiful red ‘I’ve got a battery in’ light, which should have lasted for weeks on these two batteries.

Ready to rock, batteries all plugged in, all tested with a copy of the firmware, rock on

The next morning when we woke up, we saw the badges were at 60% power, without having been turned on. The little red LED was not working quite as expected, and this meant taking all the batteries out, charging them again, as well as removing the LED’s resistor…

Skip forwards another 20 hours and we were ready to rock… AGAIN!

ElasticNinja spent quite a lot of time working on some additional code and bug fixing a bunch of things (including writing Blockchain the game).

I had to fill my time up with something and came up with the emoji badge! Since it was pretty easy to get some graphics for it and the code was relatively simple (show a picture), I made the graphics for it and ElasticNinja implemented it, since I wasn’t so good with getting a development environment up.

The Emoji Badge!

And just to push the old school graphics effects on the awesome pixel array, I found a plasma effect in C. It wasn’t totally optimal, but after a bit of hacking it made its way into the badge as well!

Eye searing plasma!

After getting all the badges flashed and getting a late lanyard delivery, it was time to set them up. First off…

Remove all the metal pins we cannot use and replace them with orange zip ties, that we chose because we’re so colour awesome!

Check out the epic orange zip ties!

AT DEFCON

DefCon was pretty amazing when it started, we missed a substantial part of the first day as we were gunning away on most of the above still, but we did make our way there and see the sights later.

Cool graphics definitely make talks even more interesting, but this was pretty good even without the graphics

I don’t have a lot of pictures at the conference, as it’s kind of a ‘no no’ to randomly go around taking pictures of people, but it was super busy and there were more people than I’d like to have counted everywhere you looked.

It was pretty phenomenal to see the amount of rooms and talks going on in addition to the 6 main tracks. There are rooms for body hacks, hard disk duplication, social engineering; pretty much anything to do with security or hacking was there.

We were VERY successful in handing out the coin challenge tokens and all 1000 tokens were gone as fast as we handed them out. On day 2 we had to put images online for people to use in the challenge.

I spent a ton of time co-ordinating with people to either help them solve the challenge or to show them how to use their prize, and also fixing badges. We had a couple of failures, but it wasn’t too bad. ElasticNinja was able to work magic with the broken badges! I was good at replacing batteries too 😛

This was the one photo of me actually at DefCon. I was hacking away at 5 badges and an official photographer grabbed this photo of me.

After DefCon

It was finally time to party, but I was pretty tired after all this and came down with a cold, so I was sick the last night of DefCon and missed out on a massive party.

Nothing says fun like the party bath full of beer

And the following day we were off. We were leaving Las Vegas to catch a flight out of Los Angeles, and since there wasn’t a flight between them we could take, we grabbed a rental car.

We might have got something a tiny bit better than the stock model. The Dodge Charger 5.7L V8 HEMI.

What did I learn ?

DefCon was pretty amazing, a lot of firsts for me, but more than anything it was pretty cool to get exposed to the massive world of hacking and security.

As well as seeing how much people care about and have fun with an area of technology that I’m less exposed to. Also, now when I see security articles, I sometimes have a reference point to expand my knowledge base off rather than just skip to the next news item.

Where to from here ?

I’ve actually started hacking with hardware a bit more and the interest there has grown quite a lot for me, I’m hoping to have a project to show off soon, but it is a “When it’s done” type affair.

Everyone always says your first DefCon is a whirlwind of experiences that you just get swept up in, and this was certainly true for me, I’d like to go again sometime, and knowing what I now know, prepare a bit better, see a bit more, learn a bit more, hopefully maybe even teach some people some stuff.

Till the next time!

WordPress, how many ways can I say hello ?

I find it funny that I can write programs that calculate risk on a share portfolio, do heavy math calculations on tiny hardware devices, and help write programs that massive enterprise businesses use to stock take, but putting up my first WordPress blog was just as massive an endeavour to get something I was happy with.

It didn’t involve writing a line of code, but there were a lot of lessons!

What did I want to accomplish ?

Mostly I was super keen to get a nice website up, quickly and easily, to share information with people. I have always heard … things… about WordPress, so I thought it a good opportunity to give it a go.

I had a few things I really wanted to get into the project

  • Self hosted
  • SSL
  • Easy to update
  • Some sort of database for the articles
  • Potentially multi user for one day
  • Follow web best practices
  • Small and fast
  • Themed as a bonus for an ever-changing look

What did I learn early on ?

I started by testing on WordPress.com just to see if I could be happy enough, and I was, but onto the first point, self hosted. So I decided to move over to old faithful Amazon Web Services and to use a really cheap Lightsail instance.

BitNami provide a nicely packaged WordPress image you can spin up at the touch of a button in Lightsail.

WordPress.com and WordPress.org are slightly different beasts. I had a few assumptions about being able to easily transfer between the two, but if you could, it was more complicated than I could figure out in the time I was willing to invest. I think if you upgrade your account you can.

Self Hosted, Easy to Update, Follows Web Best Practices, Themed, Database driven, Multi-User

SSL, Small and Fast.

Not doing too bad at the moment, only two things remaining on my little check list, or so I thought!

The list of things started then: getting it hosted on a domain, making it load faster than it was, getting SSL up and running, and getting a nice book-type read to it. So just a couple of things 🙂

I got a domain set up pretty quickly using AWS Route 53; the hardest part was waiting 5 minutes for it to come up.

SSL took me a while, but using LetsEncrypt, a very cool project to provide safe and free SSL certificates to encrypt all the things, and following a guide here, got me over the line.

There were a ton of amazing plugins that I played with along the way, but I really enjoyed a few that pushed some new technologies that I was interested in into the blog.

Yoast SEO, Glue for Yoast SEO and AMP, Amazon Polly, All-in-One WP Migration, All In One SEO Pack, WP Mail SMTP.

Getting Google Analytics up was a good idea; it mostly makes me cry, but a good idea nonetheless. I think this was all through Yoast SEO, but I used a few along the way.

The most fascinating time sink was AMP and getting that to work. It’s a special technology developed by Google to make pages load really quickly. Similar to Facebook Instant Articles.

I used almost all the plugins I could find for WordPress and eventually settled on a defunct AMP one, but have since converted to Glue for Yoast SEO and AMP.

Amazon Polly transcription was pretty easy to integrate and I find it fun listening to her drone my own words back at me, so much so that I’ve left it enabled 🙂 I really liked that it uploads the article MP3s into an AWS S3 bucket, which it can then pull them from just as MP3 files!

Where to from here ?

I think the site looks pretty good now, has a few good features and mostly gets out the way to let me write new content quickly and get it out there.

What’s in store for the future ?

I think that cool changes are coming soon, from the new Gutenberg editor for WordPress, which I’m finding really cool after a few tries using it, to the new Headless CMS movement, which is a bit of an offshoot of the whole React website with an API / database sitting behind it.

I am honestly still tempted to roll my own, but I’ve put that off so I can rather keep exploring and sharing new technologies and my first forays into them.

I hope you enjoy the articles and potentially set up your own blog or experiment with something you learnt here.


The art of the illusion with photogrammetry

The Problem

This was one of the most amazing finds of the last year for me. As someone with no real classical artistic talent, I find the idea of taking real world assets and getting them into computers fascinating.

There are many different ways and processes for doing this. There is a way of animating called rotoscoping, in which you take a stream of pictures and trace movement to get a lifelike feel for animation.

Rotoscoping

There is cel shading, which is the idea of taking an image and transforming it into something that looks more two dimensional and flat, like a cartoon.

Toon-shader

Of all of them, the most magical to me has always been photogrammetry.

Photogrammetry is the science of making measurements from photographs, especially for recovering the exact positions of surface points. Photogrammetry is as old as modern photography, dating to the mid-19th century and in the simplest example, the distance between two points that lie on a plane parallel to the photographic image plane, can be determined by measuring their distance on the image, if the scale (s) of the image is known.

https://en.wikipedia.org/wiki/Photogrammetry
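To make that concrete with made-up numbers: at an image scale of 1:1000, two points measured 4 cm apart on the photograph lie about 40 m apart in the real world.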

In 2008 Microsoft released a service in beta called PhotoSynth. I recall the service being deemed the next big thing in photography, and it was truly magical: given enough two dimensional normal camera photographs, it would calculate a 3d point cloud which could be used to model the images in 3d space, much like hundreds of Minecraft blocks. There was an idea that, one day, you would be able to take your one photo and, with the massive resources of PhotoSynth behind you, augment your photos and have them turn into 3d models.

I took my ~200 photos of a statue in Cannizaro Park, Diana and the Fawn, and I turned these photos into the most amazing 3d model, which I could spin around and view from absolutely any angle. It was like magic.

PhotoSynth

Unfortunately this was not to last and the service got degraded over time into a (very good) photo stitching service.

The problem was that with this service gone, there was no easy way for me to take my beautiful flat 2d images and turn them into a fantastical, amazing model, and they have sat dormant, sad and unanimated since 2010.

Diana and the Fawn

The Solution

Fast forward to a week ago and I found a wonderful program called Meshroom by Alice Vision.

Meshroom

This program promised the same goodness as Microsoft’s PhotoSynth, but all run locally using the power of your single local computer.

I was able to bring my images back to life using Meshroom and Blender.

Blender

And following an amazing video by CG Geek.

Loading my ~200 images into Meshroom with default settings and hitting go resulted in a lot of waiting, but a pretty spectacular result.

A small sub section of the images
Meshroom
Loaded into Meshroom

And two hours later I had a 3d model with a texture mapped onto it, all created from my 2d images.

There are a couple of things that are important when using this technology:

  • Good high quality images, without blur
  • A camera which is known by Meshroom, or a willingness to add it.
  • About 60% overlap between photos to allow the algorithm to work
  • Willingness to learn a bit of Blender (which was pretty complicated for me)
  • Patience

The most rewarding part of this project for me was the ability to bring back to life my photos from a previous project of mine. They were uploaded to Google Photos at some point and there they have sat for 8 years. They are stored using the free version, which does introduce some additional compression, but they seem to have weathered time well enough.

Once I imported the Obj Model from Meshroom into Blender I was greeted with a lot of complicated screens and information.


There was a 3D model, which included some of the surroundings, and 4 texture atlases for the scene. A texture atlas is essentially a flat texture with information on how to project it onto your model, kind of like throwing a carpet over a mannequin, but doing it in a specific way every time to end up with the same result.

I was very pleased with the model.

3D Wireframe Model

And the texture wasn’t looking too bad either.

Textured in Blender

Unfortunately my final rendered output was looking like it was in need of a bit of attention.

A very cement look and a lot of additional scenery

I was able to animate this and get a result out into video, which I was very pleased with

But there was more work to be done, the grounds around the statue didn’t look very good, but it was a quick step to remove all that additional scenery and put some lights into the scene to polish it up.

Since I had taken these photos in normal daylight without a flash, the idea was to light the model up as evenly as possible, but not introduce any new shadows, which was just a check box inside Blender for the lights.

And this took me to my final output

This video is a more dynamic pan around to show that I can now view any angle I want, and while it is a little too shiny still, it does represent the statue much better now.

Where to from here

The next steps for me are to see if Meshroom can map an indoor area properly with better photos. Indoors poses a unique problem: a lot of these algorithms work on texture or object features to work out how the camera is moving in a scene, and single colour painted walls do not represent well, so you need to have interesting rooms if you hope for this to succeed.

I think it is amazing how far this technology has come, and what you can now do with Open Source projects which are out in the wild for anyone to try. I’m quite keen to see about how much of my world I can map.

With mapping technology like Google Maps and Bing Maps, how long is it until they can apply this to our StreetView photos and give us a literal like-for-like 3D view of what we would see and could walk around?

How much more immersive would Google Earth be with technology like this behind it ?

They are using similar technology already, but if you could source ground level data, this could essentially turn our whole world Virtual.

Imagine deciding to play Grand Theft Auto or Just Cause and choosing your last holiday as the location or your local neighbourhood, because no longer are the artists going to be constrained by the environment, they’re going to focus on making dynamic stories which adapt to the environment you choose to play in.

We’re not quite there yet, but how about Tower Defence on Google Maps

What I learnt from interviewing over 50 people in a year

I went from literally no interviewing experience to an experienced interviewer in the space of only a year.

I found it a fascinating experience and thought it would be good to share.

What I want to accomplish

I’d like to reflect on the experience and share some of what I learnt, as well as what I wish I had known when I started out: the things I wish I’d known to do, and what not to do.

What was my first interview like

I think I was more scared than the person I was interviewing, I’d been at my job for 2 days and since my company was in hyper-scaling mode, I got asked to interview someone. At previous jobs I’ve done interview training and seconded interviews, but this was my first time going in solo to an interview to decide if someone would work for our company or not.

The most surreal part of the interview was when the candidate asked me if I liked working there, and the best I could do was look around and then say that the people here seem very nice.

What was my last interview like

My last interview was the culmination of a lot of practice and experience; I felt like I was running a well oiled operation. My questions, the current set, were starting to feel a little tired (it wasn’t their first rotation), but that also meant they were running really smoothly.

The highlight for me was that even though the candidate was struggling a bit, I was managing to adapt and scale the question back on my feet, so that we were able to keep moving forwards and putting pieces on top of one another leading to a conclusion.

I think the best compliment I received was that, even though he had been pushed, he’d had a great interview experience.

What do I wish I knew at the start

I wish I knew how much I was going to enjoy doing this and how great an experience it is interviewing a great candidate. Those were pleasant surprises and I would have maybe pushed to do this sooner if I’d known.

It was also a great way to get exposure to different divisions in the company (Human Resources, Recruitment, Management) and to help drive my company’s processes around hiring and what we wanted and expected from candidates.

It was rewarding and fun to work in a different area to my normal day to day work as well, giving me something to look forward to and break the normal day’s grind, while still providing a lot of value to the company.

Do

  • Prepare
  • Know what you’re trying to achieve
  • Interview on behalf of your company
  • Time management
  • Take Notes in a template
  • Make it a great experience

See here for the expanded list

 

Don’t

  • Lie
  • Put your company down
  • Be flippant
  • Judge Candidates Against Each Other

See here for the expanded list

Closing

I found it great in that I was able to learn what it is like being on the other side of the table to hone my own skills, as well as exploring scenarios with people and being able to take a small walk in their minds and see how they tick.

If you can, I’d suggest giving it a go to see if you like it.

It’s a great way to get exposure to different people and parts of your company and build a new skill.

Interviewing do’s and don’ts

This is a follow on to another blog I wrote, What I learnt from interviewing 50 people in a year, to detail a bit more in depth some of the do’s and don’ts that I found in interviewing and training interviewers.

Do

Prepare

Know your subject and questions. If it’s an interactive question where you are asking them to do something, you should definitely have done it yourself first and also try to find out all the possible solutions you can. This means you can help guide candidates as well as try to avoid them going down dead ends, by nudging them when necessary.

Know what you’re trying to achieve

This is probably the biggest piece of advice I could give. I think it’s extremely important to have goals while interviewing and go for them, if you’re blindly letting the interview go it can be a very meandering experience and depending on who you are interviewing this could go well, or go very wrong. Some people need to be coaxed into opening up and this is easier to do when you have objectives.

Interview on behalf of your company

You’re not actually deciding if you would hire this person or not, you’re deciding if they would suit your company or not and if they would provide benefit, I found this usually lined up with whether I would hire them or not, but there were subtle differences in who I would personally hire and who would suit the company best.

Time management

You’ve only got so much time with your candidate, and you need to make sure you learn what you need to during your time with them: the things that make them someone who would work well with your company, the areas where they would bring diversity, and what negatives they could bring too.

Take Notes in a template

I found a good template for taking notes after an interview invaluable; it helped me coalesce my thoughts into a final answer. I would start with a brief paragraph summarising my experience, then a list of things I liked and disliked, followed by digging a bit more in depth into parts of the interview that struck me as important, and finally I would go back to the start and give a yes or a no on the candidate. I only did the final single word answer after doing the rest, as sometimes writing your thoughts down can help you finalize your experience and answer. This was great whenever we didn’t have a meeting to decide right away, or if I interviewed a few people in a short period of time.

Make it a great experience

You should try to push the person you’re interviewing to learn about them and how they deal with hard problems and questions, but there is no reason you should be doing anything to make them unduly uncomfortable. I firmly believe that, no matter what, an interview can be a great experience for both parties with good preparation and a little effort on the interviewer’s part.

Don’t

Lie

You should never say anything untrue, we all want to portray the best sides of our company and that’s fine, but never stretch that tale too far, if you convince someone to come work for you, it should be what they want as well and they shouldn’t end up feeling tricked at any point in the future, because of something you said.

Put down your company

This almost goes without saying, but since I’ve seen it happen it does bear mentioning: you’re welcome to share some of your company’s failings, but this should never be in a bad light.

Be flippant

This is something great to be doing and something worth doing well. No matter how experienced you are, this is an important task and you should take it seriously.

Judge Candidates Against Each Other

Judging your candidates against each other is usually a mistake, as it doesn’t give you a consistent measure of comparison; if you interview 5 candidates one month and 5 the next month, they could all be fantastic the one month and not so great the next. What you need to measure against is something internal to your company, so no matter when you interview or how many people you do in a batch, it’s always a consistent metric you’re using.

Closing

There are 100’s of do’s and don’ts and most people will find their way given some time. This is just a few guidelines that stood out to me during my journey.

 

Is Raytracing the future of rendering or the next big fad ?

I was surprised that Ray tracing made a massive resurgence at GDC 2018.

Ray tracing has always been the alternative to Polygonal Rasterization (Standard 3d Rendering) and Voxels (similar to Polygons but using many many many dots in 3d space).

Take a look at these phenomenal videos which were released at the Conference. Pay particular attention to shadows, lights and reflections as well as things like the ‘feel’ of the material and the way it interacts with the light.

Proving it was real using a phone as a camera in 3d space. You can see how the camera moves in the 3d space with the phone movements, showing that this is being generated in real time.

What I want to accomplish

I’m quite fascinated by the technology and wanted to dig into the history and the alternatives, some of the theory behind it, and my reasoning for why it’s seeing a resurgence.

Mowr Video

Here is the video shown at GDC recently, demoing a lot of the technology and its use in the DX12 DXR API.

https://www.youtube.com/watch?v=mgyJseJrkx8

History… Haven’t we heard about it before

A very long time ago, before we had 3d accelerators, there was a world of choice for game developers wanting to do something approximating our reality on the PC. The surge of the FPS and Quake saw the dawn of 3dfx and eventually OpenGL. Microsoft made a push into the same space with the initial versions of DirectX and Direct3D.

Quake was one of the first mainstream games I remember to do ‘real 3d’ using polygon rasterizers, but there were other technologies which we lost along the way.

There were Voxels, using elementary 3d particles to build environments (kinda like Minecraft with smaller building blocks)

Outcast, with its voxel terrain rendering and its recent 3d rasterizer remake, 18 years later!

Raytracing has been something more of an offline affair, for getting amazing lighting for 3d models, which you can see from Autodesk.


But for real time ray tracing, it’s been in the domain of coding demo competitions for a long time, showcasing the skills of up and coming programmers, being done even on the early Amigas, as Jeff Atwood spoke about here.

And here is an amazing 64kb demo released in 2011: Exceed – Heaven Seven (Heaven 7).

The technology

Ray tracing uses the idea of ray casting, which involves shooting rays from a camera into a scene and evaluating what they hit in the scene to determine what it looks like; these rays are affected by the lights cast into the scene as well.

The idea is that the more rays you shoot into a scene the higher a level of fidelity you achieve.

The increasing detail illustrated by additional rays being cast at a 2d kettle, from a tutorial on programming ray tracing engines.

Since the rays rely on additional recursive rays being cast, you can also speed up the process by limiting the number of additional rays you cast into the scene from each object (the depth), but limiting this too heavily will result in losing a lot of the amazing lighting you get from ray tracing.
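As a rough illustration of what the depth limit does (this is a toy sketch, not the code from the JavaScript raytracer linked below), the limit is just a counter threaded through the recursive trace call, and the gathered light converges quickly as the limit grows:

// Rough illustration of limiting recursion depth in a raytracer. Each
// reflection spawns another trace() call, so the depth counter caps how many
// bounces contribute; the shading values here are made up.
#include <iostream>

struct Colour { double r, g, b; };

Colour trace(int depth, int maxDepth) {
  Colour local{0.2, 0.2, 0.2};                   // light gathered at this hit point
  if (depth >= maxDepth) return local;           // stop casting further rays
  Colour reflected = trace(depth + 1, maxDepth); // contribution from the reflected ray
  return {local.r + 0.5 * reflected.r,
          local.g + 0.5 * reflected.g,
          local.b + 0.5 * reflected.b};
}

int main() {
  // Deeper limits gather more reflected light, but with diminishing returns,
  // which is why depth 5 and depth 50 look almost identical below.
  int depths[] = {1, 2, 5, 50};
  for (int maxDepth : depths)
    std::cout << "maxDepth " << maxDepth << " -> brightness "
              << trace(0, maxDepth).r << "\n";
}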

Here is a simple Javascript Raytracer illustrating the rays for scene depth.

Depth of 1
You can distinctly see the lack of detail with regards to the floor on the balls and the lack of any detail on the reflection of the ball on the plane below it

Depth of 2
With 2 levels of rays you can see the distinct checker board pattern more realistically displayed on the balls, as well as the detail on the floor.

Depth of 5
There is a lot more detail on the floor reflections as well as muted colours in the later reflections on the ball.

Depth of 50
Very minimal change in detail. The only difference I can see is that the underside of the big ball, on the floor plane, has become a lot darker, due to more dark shadow rays being cast into the scene from the big ball obscuring the light sources at later depths.

The resurgence and the future ?

Accurate light and shadow representation in a scene has long been one of the tenets of graphical fidelity.

I can remember seeing the real time lighting in Doom 3 and barely being able to believe it was all running in real time on my own PC.

The comparisons made were always to animated movies by Pixar, and when we could have that level of detail in our games. When movies like Toy Story were made, they were taking hours rather than milliseconds to calculate frames.

I believe, if our computers have the power, it could be the next step for our graphics. It’s a much simpler way of representing the world and more closely mimics our reality and vision than rasterizers do.

There is a concern about building renderers which support current gen consoles and our current PC graphics cards, but then also supporting the new technology.

Thankfully, in a lot of instances, the final renderer has seemed to be hot swappable for Raytracing and Rasterization. Unity and Unreal engines are also great at supporting up and coming rendering technologies as they emerge.

Futuremark is in on the action, with the next version of 3DMark having a raytracing benchmark built in.

DXR Raytracing on and off

Though the Siren demo didn’t specifically mention DXR, it’s also showing how the state of the art is moving forwards in so many other areas right now.

We are going to be spoilt for ways to be amazed in the near future.

I personally think there’s a bright future for the technology and it will get more mainstream acceptance, as we ever push towards digitally representing our reality.

 

Slownet :-( , Fasternet? Raspinet!

The problem

My internet was underperforming quite severely and I didn’t know why. Occasionally, for periods of time from a few minutes to hours at a time, our internet would become unusable.

The first thing was isolating the problem. I installed all the monitoring tools I could find to narrow down the problem to an application or device.

I managed to find the app causing all my issues: it was Google Photos, but there were no throttling options in the software to stop it.


I was honestly so surprised, but anytime a device in our house came home to land on the WiFi, we couldn’t browse the web until it was done uploading its photos.

The worst thing about the problem is that we love Google Photos and couldn’t live without it. From the free uploads at a great quality to the AI on the image categorization and knowing all the photos I’m in with my favourite people.

Back to the cons! No internet for quite a substantial portion of time until videos and photos were uploaded snugly and safely to the web vault, was not something we wanted to live with.

What I wanted to solve

I wanted internet all the time; I didn’t want to have to manually throttle all connections on the network, or install software on each device to do the same.


I wanted something to automatically take care of an issue I couldn’t solve inside the Google Photos app. I had an idea that I could set up a device which would automatically share the bandwidth fairly between all our users and programs.

The solution

The Raspberry Pi is an awesome piece of hardware; anytime I have an IT household problem to solve, it usually has just what I need. From the earliest iteration it was a phenomenal piece of hardware in a bite size format and a teeny tiny price of just $35.

A brief investigation into the Raspberry Pi 3 showed that you can run it in Access Point mode, and with the Ethernet wired into the network, the rest would be software.


The initial problem was to setup the Pi as a WiFi router, which is quite well documented on the web, I followed this guide.

This will help you set up a WiFi Access Point, DHCP and DNS, but absolutely nothing was solved yet… no QoS.


The last piece of the puzzle. I was actually quite surprised that iptables, a piece of software which is normally used for directing traffic, was actually able to do quality management and throttling.

The iptables result was actually pretty great, but it was quite complicated to set up. I modified a guide I found and … we now have the best internet we’ve ever had at home; even though the line is quite slow, it’s never felt slow for us again.

A little bit more technical stuff

Using the Traffic Control (tc) command you can set up queueing disciplines which allow traffic to be classified into classes.

# Root HTB queueing discipline on eth0; unclassified traffic falls into the default class 1:15
tc qdisc add dev eth0 root handle 1: htb default 15
# A traffic class capped at 80kbit (guaranteed rate and ceiling), highest priority
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 80kbit ceil 80kbit prio 0

Inside the classes you can classify traffic filters, which will put your traffic into buckets.

Once you have your traffic classified into buckets you start prioritizing which buckets will get bandwidth allocation.

# Put packets carrying firewall mark 1 into class 1:10
tc filter add dev eth0 parent 1:0 protocol ip prio 1 handle 1 fw classid 1:10

And you can also prioritize certain packets, normally ones that are critical but never actually use a lot of bandwidth, such as SSH, which is text only for controlling servers, and SYN, which are handshaking packets. Giving these the highest priorities should never take away from your top download speed, but does give you much better response times on websites.

# Mark TCP SYN (handshake) packets with 0x1 so tc can put them in a high priority class
iptables -t mangle -I PREROUTING -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -j MARK --set-mark 0x1
iptables -t mangle -I PREROUTING -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -j RETURN

I did have a few issues getting it to start on boot, which was quite a pain due to all the commands, but I did eventually get it to start up via init.d.

Where to from here

I could have used some expensive off the shelf hardware to fix the problem, or bought faster internet, but it was actually great being able to take hardware I had around the house and fix a problem I was having.

This actually accomplished everything I wanted, but there were a few things that would have been nice to add. Since I had the WiFi router functioning on a full linux computer, getting some metrics off it would have been great.

  • View bandwidth usage by PC, Protocol, Country, Website.
  • Setup a user login and guest system for logins and turn this into an open hotspot, with only unused capacity being shared.
  • Thresholds for certain services, like limit Netflix automatically so it only streams at 720p and not 1080p.

Back to cat memes!


The pixelated dawn of virtual reality

I have always been fascinated by Virtual Reality. It was always depicted as the future in media, the place we wanted to be but weren’t yet.

There was Tron, Lawnmower Man, Johnny Mnemonic, scenes in Hackers and so many others.

Games like Dactyl Nightmare inspired me to believe it was the next level of reality, something I had to be a part of; how could we live without this?


Then it just died overnight; no one talked about it anymore and it was gone from the zeitgeist for years, living on in small niches of people building custom hardware.

Then one day it exploded again in the early 2010’s, thanks to the first big Kickstarter I supported. Palmer Luckey had worked out that commodity mobile phone screens would be good enough for the displays and the rest is tech!

Developer kit for the Oculus Rift - the first truly immersive virtual reality headset for video games.

And now with the release of movies like Ready Player One it’s making its way back in.


What I want to see ?

I want to investigate the technology and what makes it work and how far it’s come, then give some guesses on where it’s going.

Where is it now ?

Virtual reality has come a very long way in a very short time since its renaissance in 2012.

The entry level experience is getting so much better, with Google having stepped into that area with Cardboard. A great cheap experience to view photos and play some games. It uses your phone and some literal cardboard; a great idea and a great cheap introduction for everyone.


The middle range experience, slightly more involved, is Google Daydream, recently released for multiple vendors, as well as the Samsung Gear VR.


And at the high end we have a lot of options as well. There are the Oculus Rift, HTC Vive and PlayStation VR.


These give a very high quality image and the high end experience people always thought they wanted.

What is some of the technology used ?

Fresnel lenses are one of the big technologies that came in to help. Prior to these special lenses and some software, the only way to get great image quality was with a curved screen, which is very expensive. They allow software and the lenses to compensate for the display being so close to your eyes, giving more detail in the centre of the image where it’s usually needed.


Accelerometer: a sensor which allows the phone to know the direction and speed it’s moving in.

Magnetometer: an orientation sensor which can be used like a compass.

Gyroscope: another sensor which allows you to know orientation. This allows the device to know the direction it’s pointing.

All the above sensors were very expensive till mobile phones commoditized them making them commercially available and much cheaper.

Some of the other technology involved was being able to track a user to give them the ability to move around a room. All current devices do this using cameras for outside-in tracking.

And on the higher end devices you can get hand controllers as well to allow you to interact in virtual reality.

What’s coming up ?

No more wires: all the high end hardware is currently connected to a computer or a console via cables. There are plans in motion to turn this wireless.

HTC will be releasing their add on in the near future and other companies are promising that they will be able to power HDMI devices and send the signal wireless in a generic way.

Haptic feedback to allow you to feel things from the virtual environment, like resistance when putting your hand through something, or walking into a branch from a tree causing pressure on your chest as you walk past it; there are a few companies working on it.

Stand alone devices are on the horizon, like the mid range hardware but with all the hardware bundled into a single stand alone device. The Oculus Go is an upcoming example.

This would extend to then include high end features like hand controllers, high resolution screens and movement tracking. Oculus is targeting this with the Santa Cruz.

The tracking on all the current gen headsets is outside-in, which requires additional hardware and sensor setup; the HTC Vive uses passive sensors, while the Oculus and PSVR use active sensors which require talking to the base hardware. The next extension is to build all of this into the headset itself, which is called inside-out tracking, resulting in a lot more freedom with the VR headset.

But what can you do with it ?

Currently most use cases are gaming and there are quite a few big names getting in on it, trying to find out what works and what doesn’t.

Some of my favourite experiences were grounded in something I understood and extended to the unreal.

One of my favourite early experiences was the Rift Coaster, it was an amazing experience that evoked a real world reaction, Nausea!

No real user interaction, all you could do was look around but it made you feel like you were really there.

An early follow-on for people who wanted to play something was Epic Dragon VR. This was a game where you flew on the back of a dragon and could look to direct it. It was an amazing experience being on the back of this living, breathing dragon and getting it to do your bidding; such a simple mechanic, all you had to do was look where you wanted to go and the dragon would take you there.


Fast forward a few years and people had tried to make a lot of first person games, but nothing really worked; it all felt wrong and left you feeling ill afterwards.

Once we had room scale tracking there was the possibility of platform games and some teleportation locomotion techniques.

Two games that excelled at each of these were:

Space Pirate Trainer one of my all time favourite games in VR. This is just a superb start to finish short experience, it makes you feel like Starlord.

And Budget Cuts with its teleportation mechanic. The knife throwing to me was a pure touch of class.

This has expanded into some new experiences like Robo Recall by EPIC which I really do enjoy.

My one and only full price purchase so far has been The Climb by Crytek and I played this till my fingers bled, literally.

It’s not all games, and some of the applications I come back to are my Google 360 photos using the Cardboard Camera app. It gives you a sense of presence you just don’t get from a normal photo or video clip.

Where do I think it’s going ?

We’re still waiting on the big uptick, the moment where “only for early adopters” becomes “everyone wants it and has to have it”.

There is this feeling that companies can’t keep investing without getting the payoff they’re waiting for and expecting. It could all just fizzle like 3D TVs did in the home.

I personally feel like it might have to be productivity enhancement that catches on.

Imagine pulling out a pair of VR glasses and going to work on their giant virtual screens, with a virtual keyboard and virtual touch screen with some haptic feedback, all powered by the mobile phone in your pocket.

Oculus Dash is an early move in this direction.

One of the biggest pains I’ve had is the barrier to entry. Whether it’s needing to have a Gaming PC, or needing to have a 3m x 3m area to play games. Needing to put my phone into this clunky holder.

There are always these steps you need to do. In our day and age people need to be able to step into and out of something as easily as unlocking their phones, and space is at a higher premium than it ever was before.

The Oculus Go is meant to fix that one problem of a device always ready to go, which is great, but I believe tying that into something that will augment my reality to replace things, that use space I don’t have, will be where it starts adding real value into my life and starts being something that I cannot live without.

The creation of Lauren

For me, the best part of finishing a project is making people’s tedious problems into non-entities.

I believe that people’s jobs are generally encumbered with too much peripheral work to what they should actually be doing and focussing on.

I take great pleasure in making manual things go away, to free up a person to do the more interesting parts of their job.

So this project has a particular place in my heart… and GitHub.

Premise

As part of their job, a friend writes articles and publishes them on a custom CMS, and they have to put a picture in multiple formats into the CMS to allow it to display in many different forms.

Some of the formats are for the full page article, mini-preview, linking previews and sister-site integrations. Each article may need up to 10 photos, and each of these has to be cropped, centred and re-sized into their respective formats, meaning up to 40 separate images that need to be created, which was taking an hour plus each time they had to do it!

I felt certain I could help them spend their time better.

Research!

Whenever I want to start a project the first step is always research. In this age of information, when you set out to do something there is almost always someone who has done it already, so I looked to a lot of my favourite imaging tools to see how easily they could be set up to automatically create a bunch of formats each time they were needed.

I’d used XNConvert in the past and IrfanView with great success, but couldn’t find a way to easily automate them to this specific task that would be easy enough and portable enough to set up on any machine.

So the project was a go!

What I set out to accomplish

I had a couple of goals when I set out to do this.

  • Solve the Problem : It must resize into the custom formats very quickly.
  • Don’t get overzealous : It mustn’t take me a lot of time to complete.
  • Keep it simple : It should be tiny and singularly focussed.

I thought of these as mantras during my development.

I decided to write a Windows desktop application, this was an easy choice as I wouldn’t have to host anything for them and they could locally have the application on their PC and transfer it as they desired.

Using C# and Visual Studio by Microsoft.

Solve the Problem

I wanted a very simple UI, drag and drop an image and get your copies.

There were a couple iterations of even this, but this is what I came up with in the end.

A simple header (I had to use an 80’s era logo for something in my lifetime).

I also wanted to highlight the chosen name, Lauren, as per a work colleague’s request. While I was happy to oblige, I still wanted it to be somehow related to the application. The chosen expansion was Largely AUtomated REsiziNg tool.

With the two most important questions answered (what does it do and what do we call it) it was time we move on.

Don’t get overzealous

A brief break down, the top bar is for the different sizing requirements depending on CMS and tasks, so instead of creating twenty different types, we actually only create the ones pertinent to the specific task.

The bottom bar is for quickly setting the max size and quality of the picture.

These were the only two concessions I made to expanding requirements.

There is a choose file option, but dragging and dropping into the window is the expected usage.

There is no process button, we just process as you go.

Keep it simple (stupid)

I do actually try to follow this principle when I do anything.

It’s tightly coupled to a favourite saying.

Keeping this project small and singularly focussed managed to get me to the finish line fast with the best ROI (Return on Investment) for the work I was doing, which was in my spare time and to help a friend. The ROI was seeing a happy friend with a tedious task removed from their life.

Fun stuff ?!?!

My original approach was to naively resize everything regardless of the aspect ratio changing.

Surprisingly to me, this resulted in a really terrible output.

I quickly changed to using a resize and crop method which didn’t give the best result.

The image above illustrates the centred focal points in each photo, which is how the resize and crop were happening.

I then changed this to allow the user to select a focal point if they wished and defaulted it to the centre. This avoided having to go to an external tool again.

With the focal point selected you get much higher quality output.

With the resize and crop, you see that with similar aspect ratios we get similar photos, but with the long upright photos we get a better result.

A couple of things I learnt (there’s a rough sketch of the maths after this list):

  • Resize it to get the one axis right.
  • Then crop to get the sizing.
  • Maintain image quality at a certain level.
  • Sometimes you may have to zoom in to get the first axis right.
  • Some CMS’s have file size limits as well.
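Putting those steps together, here is a rough sketch of the crop-rectangle maths. The actual tool is written in C#; this is just the idea, with made-up sizes and a hypothetical focal point.

// Rough sketch of resize-then-crop around a focal point (not the tool's code).
// Scale so both axes cover the target, then crop the excess around the user's
// focal point, clamped so the crop window stays inside the image.
#include <algorithm>
#include <iostream>

struct Crop { double scale; int cropX, cropY; };

Crop fit(int srcW, int srcH, int dstW, int dstH, double focalX, double focalY) {
  // Scale up enough that both axes cover the target (may zoom in).
  double scale = std::max((double)dstW / srcW, (double)dstH / srcH);
  int scaledW = (int)(srcW * scale), scaledH = (int)(srcH * scale);
  // Centre the crop window on the focal point (0..1), clamped to the image.
  int cropX = std::clamp((int)(focalX * scaledW - dstW / 2), 0, scaledW - dstW);
  int cropY = std::clamp((int)(focalY * scaledH - dstH / 2), 0, scaledH - dstH);
  return {scale, cropX, cropY};
}

int main() {
  // A tall 1000x2000 source into a 400x300 landscape slot, focused near the top.
  Crop c = fit(1000, 2000, 400, 300, 0.5, 0.2);
  std::cout << "scale " << c.scale << " crop at " << c.cropX << "," << c.cropY << "\n";
}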

This resulted in a very good image which my friend was happy with.

To get the files into the user’s hands with minimal fuss, I create new files in the same directory as the original file, but with the size of the file appended onto the filename.

Finished product

I put the single exe in a folder on Google Drive and shared the link with the friend; after a couple of firewall false starts I did get it to them.

Once they had the application, we went through a couple iterations.

Initially I created a folder with the file name and stored the files in there, they found it easier to have it in the same folder.

After that it was mostly around the sizes required, so I added xml config, for them to control their own destiny.

And finally it was about the different tasks, so I added 4 xml profiles.

They have been very happy with the program and even shared it with colleagues to help them do their jobs quicker and more efficiently.

Where to from here

Sometimes a project should be short lived, but there are a couple things I would add if I worked on it in the future.

  • The resizing and crop algorithm could be improved, but it’s Good Enough ™ right now.
  • The UI could give more feedback and be multi-threaded.
  • The code should follow best practices and have tests built in.
  • The profiles should be configurable in App and you shouldn’t be limited.
  • Context Menu Integration in Windows.
  • It should be multi platform.

I’m not sure if I’ll ever do any of the above, but I am still very happy to have invested the time I have so far into this tool.

Have a look at the source in Github, if you want and give me a shout if you’d like to talk about it 🙂

https://github.com/tonym128/lauren