I find his path inspiring. It makes me want to create and do things, and it shows that many things thought to be impossible are actually achievable. There is a whole realm of opportunities waiting to be grabbed, and his life demonstrates just that.
There is a cost though. If it were that easy, everyone would do it.
The book goes deeper into the unique aspects that allow Musk to be an extreme overachiever, but also the personal and professional costs those same aspects inflict.
Musk’s capacity to take pain, risk, stress and adversity is staggering. I’ve been reading several biographies over the past years and met quite a few people throughout my life. I can only find a handful of them that are even comparable.
The relentless pursuit, sometimes maniacal, of product quality, functional and non-wasteful design, and effective manufacturing. All in the name of a greater objective, larger than any single human being or corporation.
The book dives into the relationships that shaped Musk (for the better, for the worse, or both), the types of relationships he consistently attracted (which resonated with his personality and upbringing), and essential relationships that allowed him and his projects to thrive.
They come to further prove that no man or woman is an island. You can’t succeed on your own.
Consistently throughout Musk’s life, there is a balance between dark and light, between lighthearted comedy and downright hell.
I’ve had a good laugh at some of the humorous moments though, such as when a hovercraft and eels were brought to one of his weddings, or when Blue Origin filed a formal protest against SpaceX to prevent the company from having exclusive use of a NASA launchpad, to which Musk replied that “if they do somehow show up in the next 5 years with a vehicle qualified to NASA’s human rating standards that can dock with the Space Station, which is what Pad 39A is meant to do, we will gladly accommodate their needs. Frankly, I think we are more likely to discover unicorns dancing in the flame duct.”
Shortly after, a SpaceX employee bought dozens of inflatable unicorns and took a picture of them standing in a flame duct 🦄
“The Algorithm” is a distillation of lessons learned while relentlessly increasing production capacity, which Musk repeatedly preached across his enterprises:
It can be summarized as: “The only rules are the ones dictated by the laws of physics. Everything else is a recommendation.” It’s a recurrent theme.
It doesn’t come without a cost though: it is often associated with conflict and chaos.
“Delete, delete, delete”
Optimization often goes hand in hand with deletion. Deleting is hard. It requires letting go of past achievements and comfort, and accepting that something is not coming back. Almost like a breakup.
Goes hand in hand with deletion.
This is where intensity and being “hard core” are most leveraged, in my view. Again, with their pros and cons.
A more obvious one. Have a machine do it. Easier said than done though.
I’m fond of biographies because they provide several data points that I can later use in my life. That is, which set of actions preceded a given consequence. Of course the context matters, and surely no action will lead to exactly the same consequence, especially if taken by different people.
But patterns start to emerge, and their respective probabilities. These help me solidify my own personal theories and strategies. Learning from others is important. It allows for “shortcuts” similar to the ones provided by good mentors.
To build these, I need data points, a lot of them.
That is why I like Isaacson’s biographies so much 2. This is the third book I have read (or listened to) by Walter Isaacson, and one of the things I enjoy most about his writing is how deep he goes into the details of someone’s story. I believe that details are important when portraying someone’s life. Sometimes, the smallest of events make an immense difference in someone’s path.
Love him, or hate him, I recommend Musk’s biography to just about anyone.
This link is an affiliate link, and as an Amazon Associate I earn from qualifying purchases whose commissions help this small establishment, at no additional cost to you↩
Except the Steve Jobs biography 1, which I could not bear to finish. Jobs’ personality was a bit too much for me to handle↩
In this post, a simple framework is proposed to classify a goal using three axes, in order to inform the approach taken towards reaching that goal. The three axes are:
An external entity (say, a government or a partner company) has made a set of requirements with which your team must comply on a fixed, non-negotiable timeline, with limited team capacity. The “definition of done” is well defined by this entity, and there are few unknowns about how to reach the goal.
This goal has sensitive timelines and is archetypally top-down. The deadline will be reached one way or the other, so all hands are needed on deck to tackle this goal. Specifically:
Your team supports a user flow, such as a check-out flow, for which the goal is to increase conversion by X. There are known levers available to drive this metric, but also many others that are unknown and would need experimentation and research. This increase is not business critical, so there is an underlying leniency in case the goal is not hit.
Unlike Scenario 1, which is exploitation heavy, this scenario favors an exploration-heavy approach, due to the high uncertainty about which projects will move the needle towards the successful completion of the goal. To de-risk and address this goal, a high level of adaptability will be needed, and several opportunities will need to be created to discover which levers can be used. Specifically:
Your team is tasked to unify several internal systems into a single one, where several dependencies between different projects exist, and the time each project will take ranges from certain to unknown.
The focus here should be on making the dependencies between the different projects clear, and allowing some leniency on the completion of each project, while actively managing the team’s capacity to focus on the projects that are discovered to require a heavier lift. Specifically:
The framework above is quite simple and could potentially be used for several other use cases. It might also be missing some essential parameters that are relevant to your specific context. Let me know if you find these; I would be happy to review the framework accordingly. Regardless, I hope the above can be useful in your next roadmapping / planning cycle!
One way to keep these writings alive would be to assign a third person the task of keeping this establishment online. What happens when that person is gone though? It’s turtles all the way down.
Another way would be to find a self-sustainable digital mechanism to keep this blog alive, i.e., a bot. That has its own issues as well, since technologies change and distributed systems like IPFS come and go.
The other option to extend the lifespan of these digital contents is to have these writings persisted into a physical medium. A book. No maintenance required. As long as it can be kept safe from unfortunate book-burning events, the probability increases that a single copy remains alive and accessible to someone (in a public library, for example), hopefully improving their life.
The jump from a humble blog to a book is quite a leap. Outrageous even. Perhaps doable.
Some of the books I’ve recently read gave me some guidance and ideas on how to take that germinal idea into fruition (note that the following links are affiliate links, and as an Amazon Associate I earn from qualifying purchases whose commissions help this small establishment, at no additional cost to you):
Apart from finding the above points inspirational, they also make me think that creating a discipline of consistently sharing my reflections and experiences via writing or videos helps on two fronts. It helps me solidify my thoughts and provides more immediate feedback on how valuable they are to viewers / readers (which allows me to course correct and iterate 2), and it also provides the building blocks to put together a hopefully meaningful and concise book, helping surpass the limited lifespan of the digital medium that hosts this blog and my YouTube videos.
No Input, No Output
― Joe Strummer, lead singer of The Clash
Given the above, here are some action points to put the wheels in motion towards building “physical persistence” via a book, and give better chances for its memes to transcend their host 3:
Looking forward to 2024! 🎉
Among the extensive range of writings that Benjamin Franklin produced, there are also hilarious examples such as Fart Proudly, an essay about flatulence written while he was living abroad as United States Ambassador to France↩
One of the learnings from making the Survival Ball video game was that the lack of exposure throughout its development process led to less visibility among the people who might be interested in it, and less immediate feedback on what worked and what didn’t, which could possibly have helped craft a better product, by virtue of allowing faster / less painful pivots↩
Coined by the British evolutionist Richard Dawkins in his book The Selfish Gene (1976), a meme is a unit of culture—such as “tunes, ideas, catch‐phrases, clothes fashions, ways of making pots or building arches.” In humans, memes have supposedly taken over much of the evolutionary burden of the traditional units of heredity, the genes. Dawkins introduces them because in his opinion the rate of human cultural evolution is far too rapid to be simply a function of gene‐centered evolution.↩
I was recently in NYC. It was one of the most interesting experiences I’ve ever had.
Each one has their own perspectives when visiting the city, and I’ve asked several people about theirs before the trip, which helped immensely when planning out my itinerary. Now it’s my time to pay it forward. Here is my guide / personal perspective of New York.
Having a pre-curated list of points of interest on Google Maps was one of the most useful and time efficient aspects of the entire trip. This allowed me to quickly improvise and make the most of my surroundings, since I could just look at the map to check which nearby places I could visit at any given time.
I didn’t consider myself to be an excessive photographer. I discovered on this trip that I was probably wrong. I can’t recall the last time I took so many photos; it was like being in an outstanding candy store of visual goodies.
Let’s start with fashion. I’ve been told that my glasses make me look nerdy / square / too serious / too much like an engineer. Well, thanks to this museum piece, I can now say they were developed for space walks:
Oh, and look back and you will also see the Space Shuttle Enterprise, the first orbiter of the Space Shuttle system:
You can find these glasses and the Space Shuttle, sit inside the Mercury capsule or an A-6 Intruder cockpit, and see real-life jet planes like the Lockheed A-12, all in the Intrepid Museum.
Where is the Intrepid Museum? The museum is a World War II–era aircraft carrier. Yes, the vessel is the museum. You can explore several of its interior sections, including the bridge, living quarters, and gun and bomb bays, and go up the same narrow escalator that pilots took to the flight deck, all spread across several floors and different explorable rooms.
Right beside the Intrepid lies the USS Growler, a diesel-powered submarine retrofitted to deliver Regulus nuclear missiles as a form of nuclear deterrence during the Cold War. You can go inside the submarine and get an intimate sense of how its crew lived throughout their two-month patrols, facing the constant scenario of being called upon to launch a nuclear missile onto Russian soil, destroying not only their target but most likely killing themselves in the process, since launching / reloading Regulus missiles was a long procedure that would expose the submarine to enemy reconnaissance.
I found the underlying story and details of this submarine and this mission to be so interesting, that right after the tour, I went immediately in search of staff members to ask a series of questions that came about during the visit. They very kindly and patiently took the time to answer them, and one of the ladies even asked me if I was an engineer. I’m not sure what gave that away. It was either the questions, or my glasses :)
MET, or Metropolitan Museum of Art, has two locations: the museum in Fifth Avenue is the most famous one and is often referred to when mentioning MET. The other lesser known branch is in Upper Manhattan, the MET Cloisters.
One of the last books I listened to on audiobook was Walter Isaacson’s Benjamin Franklin biography, where one of the passages tells the story of how Duplessis’ portrait of Franklin came to be. This 1778 painting can be seen in the museum, and curiously enough, Franklin’s depiction on $100 bills from 1914 to 1990 had him wearing a fur coat, just like in this painting.
With the same ticket, one can visit the Cloisters and the 5th Avenue museum on the same day. The Cloisters is America’s only museum dedicated exclusively to the art and architecture of the Middle Ages, and is smaller than the 5th Avenue one. Still, there is much to explore not only inside the museum, but also in its surroundings.
Surrounding the Cloisters is the beautiful Fort Tryon Park and its Heather Garden, sitting right next to the Hudson River. A bliss to behold.
Another thought that stuck with me is how deeply connected we are to other animals, Earth and outer space. Seeing life-sized representations of several of these makes them immediately more relatable.
On Saturdays, from 5 to 8 pm, admission to the museum is “Pay What You Wish”, for a minimum of $1. I would recommend doing so, since the museum can be easily visited in less than one hour, and in my opinion, it’s more about the building than the pieces themselves
Broadway shows are a hallmark of NYC, although if you are from London or elsewhere in Europe, I would recommend not seeing shows that are already available in London. For example, I’ve heard that Hamilton is better seen in London, due to the larger contextual role that King George plays for a UK audience.
The Chicago musical is a good bet, and is the one I attended; it is the second longest-running show ever on Broadway, behind only The Phantom of the Opera.
Comedy Cellar is a comedy club in Manhattan where many top New York comedians perform, and where several comedians started, like Louis C.K. or Dave Chappelle. I found it to have down-to-earth, edgy comedy, and you are asked to leave your phone in a pouch, which I believe increases the presence of the entire crowd, but also gives extra freedom to the comedians, given how powerful cancel culture can be.
You can get the Staten Island Ferry from the Whitehall terminal, for free. On your way to the terminal, several people will try to sell and cajole you into paid trips to the Statue of Liberty. The ferry is more than enough.
Central Park is an urban park between the Upper West Side and Upper East Side neighborhoods of Manhattan in New York City that was the first landscaped park in the United States. It is the sixth-largest park in the city, and it’s a great way to step out of the busyness of the city and discover the treasures that lie in the park.
The High Line is a 1.45-mile-long elevated linear park, greenway and rail trail created on a former New York Central Railroad spur on the west side of Manhattan in New York City. If starting it from the north side, you can enter it near Hudson Yards, and walk your way south from there.
From the southern end of the High Line, you can view Little Island and quickly walk to it. Little Island is an artificial island park, with some beautiful views and an interesting layout.
The memorial is located at the World Trade Center site, the former location of the Twin Towers that were destroyed during the September 11 attacks. Each of the towers’ footprints is now home to a large, recessed pool. The sheer dimension of these is awe-inspiring. The entire site is filled with symbolism.
Walking west from Central Harlem, where you can find the Apollo Theatre for example, we reach Morningside Park, from which we can climb up towards Columbia university, a private Ivy League research university in New York City.
Due to material changes during construction, the building as initially completed was structurally unsound. To save money, Bethlehem Steel changed the plans in 1974 to use bolted joints, and wind loads were calculated from perpendicular winds, as required under the building code; in typical buildings, loads from quartering winds at the corners would be less.
In June 1978, after an inquiry from an engineering student, the structural engineer recalculated the wind loads on the building with quartering winds, and found these to significantly increase the load at the bolted joints, and that a wind capable of toppling Citicorp Center would occur every 55 years on average.
Starting in August 1978, construction crews covertly fixed the issue, and six weeks into the work, a major storm was off Cape Hatteras and heading for New York. The reinforcement was only half-finished, with New York City hours away from emergency evacuation. The storm eventually turned eastward and veered out to sea. The repairs were finished successfully, and no major issues happened ever since.
Completed in 1912, the Woolworth Building was the tallest building in the world from 1913 to 1930, with a height of 241 meters. It blows my mind that such a tall building was built more than 100 years ago.
Home of one of the most expensive apartments in the world, the Billionaires’ Row hosts a group of ultra-luxury residential skyscrapers. Several of them are so thin that it’s hard to believe how they can stand upright without toppling over.
The first picture on this note was taken at the “Joker Stairs”, which is the colloquial name for a step street connecting Shakespeare and Anderson avenues at West 167th Street in the Highbridge neighborhood. The stairs are quite different from when Joaquin Phoenix danced as the Joker. They are more colorful, and have a construction site ongoing at the top.
DJ Kool Herc is credited with helping to start hip hop and rap music at a house concert at 1520 Sedgwick Avenue, which is currently covered by scaffolds as seen here. I would recommend going during daylight hours and being attentive to your route, as it gets rougher on the way there.
The stadium is the home field for the New York Yankees and New York City FC, and also has some small surrounding parks.
While the Brooklyn Bridge stands as the most famous bridge crossing the East River, you can easily dodge its flocking crowds by crossing the Williamsburg Bridge, which leads you into Williamsburg, characterized by a contemporary art scene, hipster culture, and a vibrant nightlife that has projected its image internationally as a “Little Berlin”.
You can find several interesting places in Williamsburg such as the Domino Park, Spoonbill & Sugartown Books and the City Reliquary Museum.
The Brooklyn Bridge is an iconic landmark of NYC, which you can cross from Manhattan towards Brooklyn. Be advised that this crossing is often quite crowded, and there are several vendors on the bridge itself.
Once you cross the Brooklyn Bridge, you can head directly to Dumbo, where you have a beautiful view of the Manhattan Bridge.
You can also find several scenic views of the Brooklyn Bridge and Manhattan, in both Dumbo and Brooklyn Heights.
Prospect Park is an expansive and peaceful urban park with beautiful landmarks and structures, such as the Boathouse on the Lullwater, several watercourses, bridges, monuments and statues
The Rockefeller Center Christmas Tree has been a yearly tradition ever since 1931, and now hosts a 20m+ Norway spruce, near the also historic skating rink, which opened below the tree in the plaza in 1936. This year, you’ll be able to see the tree up until January 13th, 2024.
Just a few blocks north from the above places lies the Trump Tower, where Donald Trump descended on an escalator to announce his candidacy for president, the first step on a journey few believed would take him all the way to the White House.
Ever since I was a kid I have had a considerable fascination with the USA. The culture that arrived through the small and big screens, the big companies and entrepreneurs that I admired, the language, the people I’ve met along the way, the stories. Having had the opportunity to visit and experience it first hand was a true privilege. I lived it like it was my last time, and it imprinted in me several thoughts and perspectives that I’m sure will last a lifetime.
New York ain’t no Disneyland, and Elmo’s pictures in Times Square don’t come for free either, so set expectations accordingly. It can be one of the most fulfilling experiences you’ll ever have. I advise you to come prepared and plan accordingly.
Frank Sinatra sang that he wants “to wake up in the city that doesn’t sleep”. That phrase has a deep meaning that only struck me during my trip. The city indeed does not sleep. The Times Square lights, the 24-hour subway, the constant movement. Living in NYC wakes you up, not only literally, but also figuratively. It wakes you up to a different reality that keeps you on your toes in several ways. In a way, it’s a celebration of life.
I didn’t use these because the writing process comes as a form of thinking, but if there are automated mechanisms that increasingly outperform us humans in so many different areas, where does that leave us?
How comfortable would you be in having a relationship with a being that is orders of magnitude more intelligent and capable than you? Not just logical-mathematical intelligence, but emotional, linguistic, musical or even spatial intelligence.
A being so advanced that any form of communication or relationship between you and it would always be severely constrained, even with neural interfaces. A being so advanced that you would never be able to have an essential understanding of it. A being so advanced that you wouldn’t even know that you are being manipulated by it. How many humans do you know who are friends with chimpanzees or bonobos, our closest relatives?
We are still human beings, with all the perks and limitations that come with it.
I would claim that humans will always be attracted to their familiar counterparts. Other beings that share their own struggles and limitations, and by consequence, what they create and share. Machines have their important place in the world, and history has proven to us that life as whole improved as technology evolved to serve us, but it would be healthier for us to think of them as tools, rather than human replacements.
We have been fitted with extraordinary senses and tools that allow us to efficiently interact with the world. It took millennia for nature to shape us, and as Leonardo da Vinci puts it when comparing natural to artificial structures: “Nature made the most perfect inventions. There is nothing superfluous, there is nothing lacking” (paraphrased)
Try having a machine fix the leaky pipe in your bathroom, or change your desk’s lightbulb. Odds are that you’ll be flooded and in the dark for the near term. There will come a point where they will likely do that, but we are still not there.
Machines / tools currently outperform us in several tasks on the digital realm, and in several heavy duty physical tasks. But when it comes to fine digital dexterity, human connection, and interaction with the real world and its complexities, we still have an edge.
Take a look at the daily tasks in your job. How repetitive are they? If they are repetitive, how much of a human element is present? Look at your organization and see where the rubber meets the road in terms of human connection and physical intervention.
Strive to be closer to that human interface and focus on delivering tangible value to people or businesses (group of people). With this simple mindset, I believe you’ll be in a safer place, surrounded with an abundant supply of meaningful (monetary, emotional, etc) value that you can extract and provide.
The above is a gross simplification 1, but the key point is that an investment left unused has a high associated cost. In the case above, if only two videos were to be produced, then it would have been a better idea to rent the equipment instead.
This simple heuristic of mapping the investment cost to its usage and upside can be a potential rule of thumb in different scenarios.
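As a rough illustration of the heuristic, here is a small sketch of the buy-versus-rent arithmetic; the camera price, rental rate and usage counts below are assumptions for the sake of the example, not figures from the original scenario.

```python
# Illustrative sketch: all prices and usage counts are assumed values.
def cost_per_use(investment: float, times_used: int) -> float:
    """Spread an upfront investment over the number of times it was actually used."""
    return investment / max(times_used, 1)

camera_purchase = 2000.0  # assumed purchase price of the filming equipment
camera_rental = 150.0     # assumed rental price per video shoot

for videos in (2, 5, 20):
    print(f"{videos} videos -> buying: {cost_per_use(camera_purchase, videos):.0f} per use, "
          f"renting: {camera_rental:.0f} per use")
# With only 2 videos produced, buying costs 1000 per use: renting would have been the better call.
```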
We could go on with further examples, as this simple heuristic can be applied to several scenarios, and the cost-benefit analysis could take any perspective one desires. But I would invite you to visit your storage and search for something you acquired long ago that got used fewer times than expected, and do a rough calculation of its relative cost / benefit. The same goes for any other investment you made in the past.
Was it a surprising result?
How did it stack up against your initial expectations?
This gross simplification is meant to keep the example simple, and does not take into account other direct and indirect costs, such as the computer needed to edit the videos, electricity costs, office space costs, or labor costs. It also does not take into account the gained equity, that is, the fact that you could sell the equipment and get some of that investment back. This was not considered because if the equipment breaks its value is rendered zero, and when / if it is sold, it would be for a fraction of the initial value, not only because its inherent value would likely decrease through the years, but also because of inflation.↩
If the project/task was decided to be delegated, what comes next?
One common challenge I’ve confronted when delegating has been how unpredictable and uncontrollable the results felt upon handing over the work. This unstructured approach often left me wondering how I could decrease the risk of non-completion by shrinking the delegated scope, and created confusion on the delegatee’s side about how they could proceed.
One tidbit of information from the “Who not How” book provided the guidance and structure to unlock this issue:
(…) research has found that teams who have high levels of autonomy but low clarity goals, as well as little performance feedback, perform worse than teams with low autonomy.
The same research claims that teams with high autonomy, high goal clarity and regular feedback on their results see their performance shoot through the roof.
Apart from regular feedback, goal clarity is incredibly important, and even though it might sound obvious, it can be overlooked and drowned out in the process of delegation.
After the above realization, I made it a priority for each piece of delegated work to clearly define what is expected as an outcome, its definition of done, important milestones and deadlines, when an issue should be escalated to me, and what each of the participants owns.
All of the above should be communicated in a clear and concise way, and preferably publicly documented to increase accountability and decrease ambiguity (depending on the situation, merely communicating these verbally might be enough, especially if the task is simple).
Given the above, the proposed formula for delegation is:
Delegation = [ 75/25 rule -> Goal Clarity + Autonomy + Regular Feedback ]
Upon your exit, your contributions, actions, decisions, opinions and presence most likely will have affected the trajectories of the people and the world around you, in obvious and less obvious ways. Perhaps your own trajectory was deeply changed as well.
These exits may be celebrated, and may be a special day with fulfilling words and thoughts. They may be the last time you see a set of people. The last time you will see them in your entire life.
Perhaps upon your final exit, you will have made such consequential dents in the world that your name will be engraved into the annals of history for centuries to come. On the other hand, contemporary personalities such as Steve Jobs, Bill Gates, Elon Musk and Warren Buffett might be mostly forgotten a few centuries from now. People will move on, so most likely you should be grateful if your loved ones keep you in their thoughts for the ensuing decades.
Life has several exits. Life is also short and limited. There’ll be a finite number of exits you will experience. Make the most out of each period they enclose.
Given that lengthy lists are unachievable, they need to be slimmed down to a manageable and realistic size. They need to be prioritized. Prioritization is a euphemism for sacrificing.
It’s easy to want something, it’s hard to choose it against another desirable one.
At the start of each year, I produce a small list of 3 to 4 high priority goals and 1 optional low priority goal of what I want to achieve by the end of it, along with a small description of why I want them, and their impact. This takes several hours, sometimes days, to put together, because of all the other ones that need to be left out.
Once the list is made though, it’s incredibly powerful due to its easy recall. This means that whenever I am faced with conflicting preferences, I can easily remember it and know in a snap what the right thing to do is. That is the real power behind that list. It guides all the tiny steps and decisions happening throughout the year, bringing me progressively closer to those goals.
The real question behind knowing what we want is actually what we are willing to sacrifice. Once the latter is defined and we are 100% convinced of it, then knowing what we want is bulletproof and easy to follow through on.
What are you willing to sacrifice?
Writing emails, reports, school homework, analysis, code, summaries. The list goes on.
Save for when you would delegate this task to another fellow human, long hours used to be spent trying to conjure a piece of text that could communicate something to another entity. Now this can be done at scale, at low cost, with low effort.
Since we are still liable for the results of this tool, i.e. it’s still our name on the email sender, instead of just sending that email, we might first read that LLM output, interpret it, understand it, and then send it.
Text is a form of communication. If something, or someone, wrote it for us, certain decisions were made along the way to convey the goal that we gave. Out of the many paths possible to crystallize that piece of knowledge into a piece of text, one of them was chosen.
I would claim that something gets lost in that delegation.
The writing process is more than just the production of text. Many times it requires exploring different perspectives, thinking deeply, and coming to terms with the fact that we don’t know enough about a subject and need to learn more about it.
For example, it’s essential for me to have a notebook at hand to take notes during meetings and formal discussions. I write phrases and loose words, make small diagrams, and jot down reminders. Some of them are never to be read again; others I revisit and distill into a concise structure. Most of all, they help me think about a problem.
The same holds for notes and articles. I start with a cloud of loosely related ideas, which I attempt to refine into a structured form. Similar to the double diamond process.
The same goes for books. Several times I’ve come to realize that I learned close to nothing from a book read one month before. Or conversations. Or movies. Or experiences.
That is, except when I reflected on them or acted on them. Except when I wrote down my conclusions about them.
For me, taking notes helps make sure that I’m really thinking hard about what’s in there. If I disagree with the book, sometimes it takes a long time to read the books because I’m writing so much in the margin
Time is the most valuable thing a man can spend.
― Theophrastus
There are countless articles, books, videos and lectures about time management. Since all of us have different needs, goals and environments, it is best to treat them as guidance: take the ones we need, and leave the others in our toolbelt for another time.
One of these methodologies is the Pomodoro Technique, in which a kitchen timer is used to break work into intervals, typically 25 minutes in length, separated by short breaks.
This doesn’t work for me, so here is my Custom Pomodoro Technique.
I usually need long bouts of deep focus during my work or any complex tasks. This can be reading / writing code, aligning with partners, structuring plans, going through my finances, etc. 25 minutes is typically not enough, since it coincides with where most of my flow peaks happen.
One hour is my target for continuous focused work. Below that, I know that I can still take advantage of the remaining time to productively go through a problem. Above that, I start to prepare to wind down and take a break.
My rule of thumb for breaks is for them to be long enough to feel rested, which tends to happen in less than 15 minutes.
I’ve tried several tools to track time. Physical and digital. Complex and simple. One has stuck for several years, and is the only one I use right now: Klokki
Klokki is a concise Mac application that tracks time and generates nice time reports. It also allows you to track your time automatically, but I don’t use that feature.
This is a wonderful tool that fulfills my main requirements:
Whenever I sit down at the computer and start working, I start the timer. I converged on using a single category to track all my deep flow blocks, since having multiple categories created too much mental overhead.
Whenever the timer is on, I’m in productivity / work mode, and no distractions outside my goals are allowed.
Refrain from stopping the flow session before the one hour mark. Start preparing to stop the flow session after the 1 hour mark.
Take breaks in-between flow sessions, until I feel rested. Normally less than 15 minutes.
When the timer is off, scattered tasks, exploration and wandering are allowed and encouraged.
This is an incredibly simple methodology that I’ve been using throughout the past years. The recurrent benefits I’ve captured from it were:
Feel free to use this guidance / methodology and experiment with it. Or not 🙏
One strategy that has consistently worked for me has been using documentation to checkpoint each step of a process. Just like in a videogame, where the last saved checkpoint can be recalled.
Similar to a CPU, our short-term memory (STM) has access to a limited amount (around four to seven concepts have been suggested 2) of very quick memory. Every time we change to a different task, most of the resident concepts need to be flushed in order to load the ones from the incoming task. This is quite taxing by itself and only narrowly optimizable, since our wetware is mostly fixed. The biggest efficiency opportunities lie in where these concepts are loaded from.
When working on a task, often we grab several simultaneous concepts, combine them together and generate new ones, which are then added to the concept pool and built upon further. During this process we are faced with these options:
Option 2 works well, but requires significant cognitive effort and often only keeps the most relevant conclusions. If for some reason these need to be reconsidered, or we need to understand why they were generated in the first place, we backtrack into the seminal thought processes, which in turn will probably require repeating previous work.
This is one of the biggest offenders when task switching, because even if we recall the end state of a task (which is hard by itself), a lot of time and energy are spent when revisiting the overall context via backtracking.
Enter Option 3: progressively store these existing concepts in an external memory bank, such as a document or piece of paper. Although writing the contents of our working memory into an external memory bank can take some effort, the yielded results greatly justify the initial investment:
On the other hand, if you spend your whole day sitting, or typing, always satisfying your hunger, your AMPK and mTOR pathways (which are master regulators of cell metabolism) and sirtuins say: “Hey, times are good. Let’s just grow tissue, go forth, multiply and not build a sustainable body in the long run.”
The idolized unhurdled, stable and comfortable life, the easy access to resources and high caloric food, can do more harm than good when left unattended.
I’ve consumed gargantuan amounts of television throughout my youth, so I’ve grown familiar with its charming allure. In an almost frictionless experience, where it suffices to lean back and pick one of the channels or series/movies/documentaries readily suggested by a streaming provider, an experience is entered where carefully crafted packages and passionate stories are delivered to us, requiring only our attention. It can even serve as a third person to liven up the room, when in a tedious moment with a loved one.
No real struggle, no real effort required. A passive experience. The body rejoices in it, like it does when ingesting a sugar packed sweet. Just like most, I enjoy that feeling. Just like most, I have a limited amount of willpower. Hence why I don’t have a TV at home. Neither do I have sweets.
I love it. I don’t love the fact that I’m blocking myself away from those satisfactions. No, I relish the long term results. The time it opens up to actively read, write (this post for example), create, think, be present with others, and even watch podcasts (of which I make an effort to pick an episode outside the normal recommendations). I’m not perfect, and the above serves as a rule for me. And rules do have exceptions, and that’s ok.
Still, this is the case for not having a TV.
This is a new space where you will be able to find essays, notes and other quick thoughts, whereas the existing Blog section will remain allocated to denser article pieces.
I have a rule when buying non-essential items for my house: if I’ve been recurrently thinking about buying something over a span of multiple days, then it’s a good purchasing signal. The thought process for writing articles for this blog is similar.
The downside of this approach is that the ideas and thoughts that don’t make it to a blog article are relegated into the realm of personal writings and/or conversations with others, but never exposed to the public. On the other hand, this very same thirst to share quick thoughts was recurrent by itself.
The yearning towards developing this concept was there, but a satisfactory hosting platform was still missing:
The third option was to host them here, in Byte Tank, which was customized over several years and already had the workflow and components I strived for. But how to frame this new concept into the existing structure?
The answer came from Alin Panaitiu. On his website, Alin has a separate section for Notes, which makes perfect sense to me. A Notes section sets the expectation that these are not fully fledged articles, and that they cater towards volume, rather than density.
An idea is only as good as its implementation, and these were the last nudges that incentivized me to bring it to life:
This is an exciting new phase for this modest establishment, and I’m looking forward to populating this section with new content. Until then!
Books are an immensely valuable source of condensed knowledge and insightful thoughts. Here is a selection of my favorite quotes, from books I’ve read throughout the past years:
Zero to One: Notes on Start Ups, or How to Build the Future (Masters, Blake;Thiel, Peter) ↩
Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future (Ashlee Vance) ↩
Methods of Persuasion: How to Use Psychology to Influence Human Behavior (Kolenda, Nick) ↩
The 4-Hour Workweek , Escape the 9-5, Live Anywhere and Join the New Rich (Timothy Ferriss) ↩
It Doesn’t Have to Be Crazy at Work (Jason Fried; David Heinemeier Hansson) ↩
The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change (Camille Fournier) ↩
The Five Dysfunctions of a Team, Enhanced Edition: A Leadership Fable (J-B Lencioni Series) (Patrick M. Lencioni) ↩
The article’s thumbnail image background was created via Open AI’s DALL-E, with the prompt: “bright, lively and colorful epic light coming from piles of books, with pages flying, against white background, digital art”
Life is growth. You grow or you die.
― Phil Knight, Shoe Dog
I’ve had the good fortune of being able to develop my software engineering work as a tech lead (which is not a fixed role), focusing more of my effort on solving sets of problems, as opposed to individual projects/tasks. While navigating this path, I’ve gathered a set of teachings from my mentors/peers/managers, along with other personal learnings and observations, which I attempted to compile into a condensed list. Hopefully these can be useful in your own journey, just like other people’s learnings were to mine:
The duty of a leader is to serve their people, not for the people to serve them
— Elon Musk (@elonmusk) February 11, 2022
Hacker News is my go-to source for relevant, interesting and constructive discussions on a wide range of topics. I usually consume it via Daemonology’s Hacker News Daily, to catch up on the most active topics in the community.
Daemonology’s Hacker News Daily presents the title, story link, Hacker News discussion link, and is optimized for desktop. I usually consult it on mobile, and when I am several days behind, I sweep through the archives and open several story/discussion links in separate mobile tabs, in order to triage the stories with a quick glance at their web page and their discussion’s top comments.
This project aims to ease that process, by generating a set of web pages presenting the best daily Hacker News for mobile or desktop, with screenshots and top comments for each story, while aiming to have a low footprint to the end user.
The base idea is to recurrently get the best Hacker News stories via its API; take screenshots of the web pages that these stories link to; and have this workflow executed via GitHub Actions.
Specifically, these are the steps taken to generate the final web pages (illustrated by the above diagram):
1. Every day, a new Github Workflow is spawned, kickstarting the entire process.
2. Update History: a days_history.dat artifact is created in every run. It is a simple Python Pickle containing the story ids from the past days, stored in the form of a deque, in order to pop the older days as the history grows larger. The updated days_history.dat will be used in the next step below, and will also serve as a base for the next workflow (a rough sketch of this mechanism follows this list). If the days_history.dat artifact cannot be found in the previous successful runs, it is rebuilt by parsing Daemonology’s Hacker News Daily web page.
3. Create Day/Story Models: based on days_history.dat, a list of hydrated days with their respective stories is built. These models will later provide all the information needed to create the final web page views.
4. Create the generated folder. This is where the generated web pages and screenshots will be placed, in order to be later deployed to GitHub Pages.
5. Gather Screenshots: a screenshot of the web page linked by each story is taken and placed in the generated folder.
6. Generate Web Pages: the final web pages are built from the day/story models and placed in the generated folder.
7. The generated folder is deployed to the gh-pages branch, which is published as a GitHub page, making the generated contents publicly accessible.
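As a rough sketch of how such a bounded history could work, using the days_history.dat name from the description above (the maximum number of days kept and the exact structure are assumptions; the actual implementation lives in the repository):

```python
import pickle
from collections import deque

MAX_DAYS = 30  # assumed history length, not necessarily the repository's value

def load_history(path: str = "days_history.dat") -> deque:
    """Load the pickled history of story ids, or start a fresh bounded deque."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return deque(maxlen=MAX_DAYS)

def append_day(history: deque, story_ids: list, path: str = "days_history.dat") -> None:
    """Append today's story ids; the deque silently drops the oldest day once full."""
    history.append(story_ids)
    with open(path, "wb") as f:
        pickle.dump(history, f)
```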
The full source code is accessible at https://github.com/lopespm/hackernews_daily and the generated website at https://lopespm.github.io/hackernews_daily, feel free to improve it or to leave some feedback.
This design was implemented using Docker Compose 1, and you can find the source code here: https://github.com/lopespm/autocomplete
The design has to accommodate a Google-like scale of about 5 billion daily searches, which translates to about 58 thousand queries per second. We can expect 20% of these searches to be unique, that is, 1 billion unique queries per day.
If we choose to index 1 billion queries, with 15 characters on average per query2 and 2 bytes per character (we will only support the english locale), then we will need about 30GB of storage to host these queries.
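The same back-of-envelope numbers, reproduced as a small script:

```python
daily_searches = 5_000_000_000                      # ~5 billion searches per day
qps = daily_searches / (24 * 60 * 60)
print(f"~{qps / 1000:.0f}K queries per second")     # ~58K

unique_ratio = 0.20                                 # ~20% of searches assumed unique
avg_chars, bytes_per_char = 15, 2                   # english locale only
storage_bytes = daily_searches * unique_ratio * avg_chars * bytes_per_char
print(f"~{storage_bytes / 10**9:.0f} GB to index the unique queries")  # ~30 GB
```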
The main two APIs will be: one to collect a searched phrase (collect-phrase), and one to return the top phrases for a given prefix (top-phrases).
The two main sub-systems are: the Assembler, which collects the searched phrases and assembles them into tries, and the Distributor, which distributes the tries across backend nodes and serves the top-phrases requests.
This implementation uses off-the-shelf components like kafka (message broker), hadoop (map reduce and distributed file system), redis (distributed cache) and nginx (load balancing, gateway, reverse proxy), but also has custom services built in python, namely the trie distribution and building services. The trie data structure is custom made as well.
The backend services in this implementation are built to be self sustainable and don’t require much orchestration. For example, if an active backend host stops responding, it’s corresponding ephemeral znode registry eventually disappears, and another standby backend node takes its place by attempting to claim the position via an ephemeral sequential znode on zookeeper.
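As a minimal sketch of that claiming mechanism, assuming kazoo as the zookeeper client and the znode layout described in the steps further below; the function and variable names are illustrative, and the actual repository code differs (for instance, it only marks a node as ready after its trie has been loaded):

```python
from kazoo.client import KazooClient

def try_claim_partition(zk: KazooClient, target_id: str, partition: str,
                        hostname: str, replicas: int) -> bool:
    """Try to claim one of the first `replicas` slots of a partition via an
    ephemeral sequential znode; back off if the partition is already full."""
    nodes_path = f"/phrases/distributor/{target_id}/partitions/{partition}/nodes"
    created = zk.create(f"{nodes_path}/node-", value=hostname.encode(),
                        ephemeral=True, sequence=True, makepath=True)
    members = sorted(zk.get_children(nodes_path))
    if created.split("/")[-1] in members[:replicas]:
        return True     # slot claimed: this node should now load the partition's trie
    zk.delete(created)  # partition already has enough replicas; try another one
    return False

zk = KazooClient(hosts="zookeeper:2181")
zk.start()
claimed = try_claim_partition(zk, "20200807_1517", "mod|racke", "backend-3", replicas=2)
```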
The data structure used by, and provided to the distributor is a trie, with each of its prefix nodes having a list of top phrases. The top phrases are referenced using the flyweight pattern, meaning that the string literal of a phrase is stored only once. Each prefix node has a list of top phrases, which are a list of references to string literals.
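As an illustration of this structure (a simplified sketch, not the repository's actual implementation): every prefix node keeps a small, weight-ordered list of references to phrase strings, and each phrase string object is stored only once.

```python
class TrieNode:
    __slots__ = ("children", "top_phrases")
    def __init__(self):
        self.children = {}     # char -> TrieNode
        self.top_phrases = []  # (weight, phrase) pairs; phrases are shared references

class Trie:
    def __init__(self, max_top: int = 10):
        self.root = TrieNode()
        self.max_top = max_top

    def add_phrase(self, phrase: str, weight: float) -> None:
        # The single `phrase` string object is referenced by every prefix node listing it.
        node = self.root
        for char in phrase:
            node = node.children.setdefault(char, TrieNode())
            node.top_phrases.append((weight, phrase))
            node.top_phrases.sort(key=lambda pair: -pair[0])
            del node.top_phrases[self.max_top:]  # keep only the heaviest phrases

    def top(self, prefix: str) -> list:
        node = self.root
        for char in prefix:
            if char not in node.children:
                return []
            node = node.children[char]
        return [phrase for _, phrase in node.top_phrases]
```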
As we’ve seen before, we will need about 30GB to index 1 billion queries, which is about the memory we would need for the above mentioned trie to store 1 billion queries. Since we want to keep the trie in memory to enable fast lookup times for a given query, we are going to partition the trie into multiple tries, each one on a different machine. This relieves the memory load on any given machine.
For increased availability, the services hosting these tries will also have multiple replicas. For increased durability, the serialized version of the tries will be available in a distributed file system (HDFS), and these can be rebuilt via map reduce tasks in a predictable, deterministic way.
When a user submits a search, the phrase is collected as follows:

1. The user’s request arrives at http://localhost:80/search?phrase="a user query" and is forwarded to the collector’s load balancer at http://assembler.collector-load-balancer:6000/collect-phrase?phrase="a user query", which routes it to a collector instance at http://assembler.collector:7000/collect-phrase?phrase="a user query".
2. The collector publishes the phrase to the phrases Kafka topic, and assembler.kafka-connect dumps the messages from the phrases topic into the /phrases/1_sink/phrases/{30 minute window timestamp} folder 5 6

The assembler then builds a new set of tries from the collected phrases:

1. A TARGET_ID is generated, according to the current time, for example TARGET_ID=20200807_1517
2. A map reduce job takes the most recent /phrases/1_sink/phrases/{30 minute window timestamp} folders, and attributes a base weight for each of these (the more recent, the higher the base weight). This job will also sum up the weights for the same phrase in a given folder. The resulting files will be stored in the /phrases/2_with_weight/2_with_weight/{TARGET_ID} HDFS folder
3. Another map reduce job merges /phrases/2_with_weight/2_with_weight/{TARGET_ID} into /phrases/3_with_weight_merged/{TARGET_ID}, and a further job orders the merged phrases by weight into /phrases/4_with_weight_ordered/{TARGET_ID}
4. The zookeeper znode /phrases/assembler/last_built_target is set to the TARGET_ID
5. The trie builder service, watching the /phrases/assembler/last_built_target znode, builds a trie for each partition 10, based on the /phrases/4_with_weight_ordered/{TARGET_ID} file. For example, one trie may cover the prefixes until mod, another from mod to racke, and another from racke onwards.
6. Each serialized trie is stored in a /phrases/5_tries/{TARGET_ID}/{PARTITION} HDFS file (e.g. /phrases/5_tries/20200807_1517/mod|racke), and the zookeeper znode /phrases/distributor/{TARGET_ID}/partitions/{PARTITION}/trie_data_hdfs_path is set to the previously mentioned HDFS file path.
7. Finally, the /phrases/distributor/next_target znode is set to the TARGET_ID

The distributor then loads and activates the newly built tries:

1. The standby backend nodes watch /phrases/distributor/next_target, detect its modification and create an ephemeral sequential znode, for each partition, one at a time, inside the /phrases/distributor/{TARGET_ID}/partitions/{PARTITION}/nodes/ znode. If the created znode is one of the first R znodes (R being the number of replica nodes per partition 11), the node proceeds to the next step. Otherwise, it removes the znode from this partition and tries to join the next partition.
2. The backend node fetches the serialized trie from /phrases/5_tries/{TARGET_ID}/{PARTITION}, and starts loading the trie into memory.
3. Once the trie is loaded, the backend marks itself as ready by setting the /phrases/distributor/{TARGET_ID}/partitions/{PARTITION}/nodes/{CREATED_ZNODE} znode to the backend’s hostname.
4. A coordinating service watches the /phrases/distributor/{TARGET_ID}/ sub-znodes (the TARGET_ID is the one defined in /phrases/distributor/next_target), and checks if all the nodes in all partitions are marked as ready.
5. When all nodes are ready for the new TARGET_ID, the service, in a single transaction, changes the value of the /phrases/distributor/next_target znode to empty, and sets the /phrases/distributor/current_target znode to the new TARGET_ID. With this single step, all of the standby backend nodes which were marked as ready will now be active, and will be used for the following Distributor requests.

With the distributor’s backend nodes active and loaded with their respective tries, we can start serving top phrases requests for a given prefix:

1. The user’s request arrives at http://localhost:80/top-phrases?prefix="some prefix" and is forwarded to the distributor’s load balancer at http://distributor.load-balancer:5000/top-phrases?prefix="some prefix", which routes it to a frontend instance at http://distributor.frontend:8000/top-phrases?prefix="some prefix".
2. The frontend fetches the partitions for the current TARGET_ID from zookeeper (/phrases/distributor/{TARGET_ID}/partitions/ znode), and picks the one that matches the provided prefix (a minimal sketch of this lookup follows these steps).
3. It then picks one of the backend nodes registered under the /phrases/distributor/{TARGET_ID}/partitions/{PARTITION}/nodes/ znode, and gets its hostname.
4. Finally, the request is forwarded to that backend at http://{BACKEND_HOSTNAME}:8001/top-phrases="some prefix", which answers with the top phrases from its in-memory trie.
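As a minimal sketch of the partition lookup in step 2, assuming the boundaries from the example above (one trie until mod, another from mod to racke, another from racke onwards); the partition names are illustrative:

```python
import bisect

BOUNDARIES = ["mod", "racke"]                         # upper bounds of the first partitions
PARTITIONS = ["start|mod", "mod|racke", "racke|end"]  # illustrative partition names

def pick_partition(prefix: str) -> str:
    """Pick the partition whose lexicographic range covers the given prefix."""
    return PARTITIONS[bisect.bisect_right(BOUNDARIES, prefix)]

assert pick_partition("marble") == "start|mod"
assert pick_partition("mouse") == "mod|racke"
assert pick_partition("zebra") == "racke|end"
```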
Note: Execute the shell command docker exec -it zookeeper ./bin/zkCli.sh while the system is running to explore the current zookeeper’s znodes.
Zookeeper znodes such as /phrases/assembler/last_built_target, /phrases/distributor/next_target and /phrases/distributor/current_target each hold a TARGET_ID value (e.g. 20200728_2241).
Note: Access http://localhost:9870/explorer.html in your browser while the system is running to browse the current HDFS files and folders.
The intermediate and final HDFS folders (2_with_weight, 3_with_weight_merged, 4_with_weight_ordered and 5_tries) are each keyed by a {TARGET_ID}.
You can interact with the system by accessing http://localhost in your browser. The search suggestions will be provided by the system as you write a query, and you can feed more queries/phrases into the system by submitting more searches.
You can get the full source code at https://github.com/lopespm/autocomplete. I would be happy to know your thoughts about this implementation and design.
Docker compose was used instead of a container orchestrator tool like Kubernetes or Docker Swarm, since the main objective of this implementation was to build and share a system in a simple manner.↩
The average length of a search query was 2.4 terms, and the average word length in English language is 4.7 characters↩
Phrase and Query are used interchangeably in this article. Inside the system though, only the term Phrase is used.↩
In this implementation, only one instance of the broker is used, for clarity. However, for a large number of incoming requests it would be best to partition this topic along multiple instances (the messages would be partitioned according to the phrase key), in order to distribute the load.↩
/phrases/1_sink/phrases/{30 minute window timestamp} folder: For example, provided the messages A[time: 19h02m], B[time: 19h25m], C[time: 19h40m], the messages A and B would be placed into folder /phrases/1_sink/phrases/20200807_1900, and message C into folder /phrases/1_sink/phrases/20200807_1930↩
We could additionally pre-aggregate these messages into another topic (using Kafka Streams), before handing them to Hadoop↩
For clarity, the map reduce tasks are triggered manually in this implementation via make do_mapreduce_tasks, but in a production setting they could be triggered via cron job every 30 minutes for example.↩
An additional map reduce could be added to aggregate the /phrases/1_sink/phrases/ folders into larger timespan aggregations (e.g. 1-day, 5-week, 10-day, etc)↩
Configurable in assembler/hadoop/mapreduce-tasks/do_tasks.sh, by the variable MAX_NUMBER_OF_INPUT_FOLDERS↩
Partitions are defined in assembler/trie-builder/triebuilder.py↩
The number of replica nodes per partition is configured via the environment variable NUMBER_NODES_PER_PARTITION in docker-compose.yml↩
The distributed cache is disabled by default in this implementation, so that it is clearer for someone using this codebase for the first time to understand what is happening on each update/step. The distributed cache can be enabled via the environment variable DISTRIBUTED_CACHE_ENABLED in docker-compose.yml↩
This article/postmortem provides an in-depth look into the process of building Survival Ball, a Single / Local Co-Op physics-based game available on Steam for Windows and macOS. From prototype to showcase at Lisboa Games Week, passing through the related principles, design decisions, level creation process, tools and technical details.
About six years ago I started learning Unity in my spare time, going through several official and unofficial tutorials. One was a simple tutorial on how to create a wire chain through physics. After its completion, I added a simple platform and a sphere. When arrow keys were pressed the sphere was torqued, and on spacebar press a vertical force was added to the sphere, which caused it to jump. I was in awe. Unity’s physics were accessible and felt natural out of the box. Not only that, but a rough working scene could be quickly set up by gluing together some basic concepts.
An idea started to bubble up around this simple project: staying as long as possible on top of a platform. Antagonists, materialized in the form of various hazards, were added in order to coerce (directly or indirectly) the player off the platform. The frequency and power of these hazards increased over time, raising the probability of reaching the level’s finishing state. From this starting point, more elements were added, such as interactable platform edge blocks, destroyable floor tiles, backdrops, user interface and basic sound effects.
Two versions resulted from this. The first ad supported version was available for Android mobile devices only, having one playable level in single-player mode. Afterwards, a new adless version was released for OUYA, Android TV and Kongregate, featuring a new versus mode.
I went to work on other projects after the prototype’s launch, but the feedback provided from gamers was kept in the back of my mind in the meanwhile. Several ideas eventually piled up for a new, improved version of the game, and about two years ago they were first brought to fruition. First at night after work, then full-time until completion.
Laying out a solid base was essential in the beginning of the project, especially concept-wise. Pre-production, if you will. Surviving on top of a platform was the root idea, but was there latitude for perpendicular ideas? Multiplayer mode? Versus or cooperative? Online or local only? Powerups? Which player movements would be available, only jump? Progressive difficulty (the player does not choose the difficulty level) or bucket based difficulty (e.g. easy, normal, hard)? What will be the direction of visual style, audio, and code architecture? How many levels? What is the player’s motivation to play these levels?
Less is more when it comes to the game’s root ideas, and surviving on top of a platform seemed like a good premise. This idea was dissected and questioned thoroughly during the initial phase of development, in order to make sure it was well sustained. Other ideas such as going as high as possible or objective based levels were considered, but set aside in the initial stage of the project. They were not completely scrapped, such that both survival/time based and objective based levels would later cohabitate throughout the game’s development.
Assuming survival as the most important root idea, a question came up: survive what? It could be either you against the world, or you against other player(s), or a combination of both. The latter two options would prompt the need of a multiplayer versus mode and/or player AI, and the former one would naturally accommodate a single player and/or cooperation mode.
Allowing for a local multiplayer mode would be something desirable, since I had a personal bias towards games that physically join a group of friends in the same place. Also, online multiplayer was set aside due to the added entropy and complexity associated with it.
As for cooperative (co-op) games, there is usually an increased inclusion factor due to the wealth of opportunities to strike a balance between experienced and less experienced players, translating to less chances of frustration caused by huge proficiency gaps.
Given the above points, solo and local co-op modes were chosen to be included in the game.
Again, having in mind that less is more, a progressive difficulty mode (where the player does not choose the difficulty level) was strived for, instead of a bucket based approach (e.g. easy, normal, hard). Choosing a difficulty level when starting a simple game like this one forces the player to commit to a given difficulty level at the start, and to accept that decision. The responsibility is on the player. It often happens to me that when I choose a difficulty level like “easy” or “normal”, I keep wondering throughout the game if I am really good at it, or if I am just being thrown softballs. Progressive difficulty resolves that problem immediately, but it falls on the developer’s shoulders to craft the game in such a way that all difficulty ranges are covered just right.
Progression would be two-fold: each level would start in an accessible way, but gradually get more difficult as time passed. At the same time, each level would be more challenging than the previous one as the campaign advanced.
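As a rough illustration of this two-fold progression, a difficulty value can be derived from both the level’s position in the campaign and the time elapsed within the level. The names and constants below are hypothetical, purely for illustration; they are not taken from the game’s code.

```csharp
// Illustrative sketch only: a two-fold difficulty curve combining the
// campaign level index with the in-level elapsed time. All names and
// constants are assumptions.
public static class DifficultyCurveSketch
{
    public static float Evaluate(int levelIndex, float elapsedSeconds)
    {
        float campaignBase = 1f + 0.5f * levelIndex;    // each level starts harder than the last
        float inLevelRamp  = 1f + elapsedSeconds / 60f; // and ramps up as the level goes on
        return campaignBase * inLevelRamp;
    }
}
```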
There was a need for offensive maneuvers against hazards, and for providing mid-air control to the player. Two new movements were added to the already existing jump and double jump: dash and stomp. Dash provided mid-air control and allowed the player to interact with the world’s objects and attack hazards. Stomp, inspired by Super Mario 64, allowed for button interaction, and for a mid-air stop and plunge for precise vertical landings.
With this set of movements, the player was able to gain full control over the ball. Easy to learn, hard to master. Additionally, it removed the need for aerial control via directional controls, which would be less physically coherent and could quench some of the fun of gaining control proficiency.
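A minimal sketch of how such a movement set can be wired together on top of a Rigidbody is shown below. The input names, force values and jump-reset logic are assumptions for illustration; this is not the game’s actual controller.

```csharp
using UnityEngine;

// Minimal sketch of a ball controller with jump, double jump, dash and stomp.
// Input axes "Dash" and "Stomp", forces and reset rules are assumptions.
[RequireComponent(typeof(Rigidbody))]
public class BallMovementSketch : MonoBehaviour
{
    public float jumpForce = 7f;
    public float dashForce = 12f;
    public float stompSpeed = 25f;

    Rigidbody body;
    int jumpsLeft = 2; // jump + double jump

    void Awake() => body = GetComponent<Rigidbody>();

    void Update()
    {
        if (Input.GetButtonDown("Jump") && jumpsLeft > 0)
        {
            jumpsLeft--;
            body.velocity = new Vector3(body.velocity.x, 0f, body.velocity.z);
            body.AddForce(Vector3.up * jumpForce, ForceMode.VelocityChange);
        }

        // Dash: horizontal impulse towards the aimed direction (mid-air control).
        if (Input.GetButtonDown("Dash"))
        {
            Vector3 aim = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical")).normalized;
            body.AddForce(aim * dashForce, ForceMode.VelocityChange);
        }

        // Stomp: stop mid-air and plunge straight down for a precise vertical landing.
        if (Input.GetButtonDown("Stomp"))
            body.velocity = Vector3.down * stompSpeed;
    }

    void OnCollisionEnter(Collision collision) => jumpsLeft = 2; // landed on something: reset jumps
}
```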
An assumption was made that powerups would be distributed randomly, either as items scattered throughout the level or by other means. Powerups were ultimately not used, for four reasons:
Defining the game’s visual appearance early on was quite important, since it would influence the reasoning behind 3D models, textures, UI, colors, mood and even audio. The two most influential references were Material Design (first version) and Lara Croft GO.
Coming from an Android development background, I was exposed to Material Design, a visual language that synthesizes the classic principles of good design. Its principles go beyond mobile UI development and span a swath of different form factors and use cases. I personally found it beautiful and applied its principles throughout the game’s aesthetics, both in the UI and in the overall style of the game. This influenced the game towards a simple, clean and coherent style.
Lara Croft GO is a beautiful game. The charming colors and elegant (yet simple) style made an impression on me. While building up the game’s aesthetics, I consulted several screenshots of Lara Croft GO and attempted to transpose its most pleasant design elements into Survival Ball.
The prototype was a literal extension of the initial wire chain and ball physics experiment, and features were added as I was learning the ropes of Unity. This, added to rapid prototyping and little to no code architecture planning, resulted in a codebase that was entangled, difficult to follow and hard to extend.
The new version was built from scratch and several architectures were explored in the beginning of the project, until one was found to fit the game best. The chosen approach followed these principles:
The level creation process went somewhere along these lines: I would go for a long walk without any audiobooks or entertainment, and allow my mind to wander. If an idea came up, I would write it down in a Google Keep note. At a later time, in a quiet place, I would dissect these ideas on paper to materialize their pros and cons, while further developing the ones that seemed most promising.
Before building any of the levels, a scrappy sandbox level was built to work out the broader strokes of the player’s movement. In this sandbox, movement, jump, stomp and dash (immediate single charge at the time) were first developed and loosely tweaked.
SpaceX was landing their first rockets on drone ships when the first idea of the bunch was developed. It revolved around the concept of a four-thruster rocket platform. Each of these thrusters could be activated or deactivated by its respective button. An additional center button turned all the thrusters on or off. The platform had a finite amount of fuel that could be replenished by shoving fuel crates, hazards or players into the center fuel intake.
This level would set the tone for the upcoming ones, so it was important to pin down the core gameplay, level structure and hazard dynamics before moving on to the next level. Just like changing a cosmological parameter would drastically change everything in the universe (like the possibility of life), the game’s fundamental parameters were set as tightly as possible early on, because even a small change could break other already built levels. For example, a small change in how fast the player is, how high it can jump or how gravity affects it could translate into several hours of extra work to re-adjust every level accordingly, or even render some levels obsolete.
For these reasons, it took around 354 hours to develop the bulk of RocketX, about one fifth of the project’s timespan.
The original concept for the second level was a variation of the Simon game, hence the name Garfunkel. A sequence of lights would be played out first, and the player(s) would then have to mimic that sequence by hitting the respective target pieces in order, at the cost of a piece falling off if it were hit outside the sequence. There would be as many platforms as players.
A quick level layout was built. Soon after the first plays, it became apparent that the concept was flawed. Mimicking a sequence would mean that during the first phase, while the model sequence played out, the player would be left with nothing to do movement-wise. Moreover, after the second or third prompt of the sequence, the player would most likely forget which came next, because his/her main focus would be getting the ball movement right and not hitting any of the non-sequenced tiles. Simon is about memory, Survival Ball is about motor skill.
Pivoting from this came the idea of collapsing the two phases into a single one, with no memorization needed. The platform piece prompts would be immediate and coupled with the background music’s rhythm, which would progressively get faster as time passed. The player would have limited time to hit these prompts, as the unhit prompted pieces would fall after each music measure.
There was only one platform, since having multiple platforms would elicit a natural exploit: the player would quickly notice that the best strategy was to focus all energy on a single platform. There was no real gain in saving all platforms; one would suffice, and would be easier for the player to manage.
The background music was played programmatically. The easiest approach was to play each audio sample individually, meaning that the rhythm was dictated by the frequency at which these were played. Long and complex audio samples were not fit for this purpose, since they had their own tempo, and only transforming them would correctly match them with the overall music tempo. Simple and short audio samples were used instead, especially drum samples, which worked great for this use case.
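A minimal sketch of this kind of sample-by-sample playback in Unity, assuming short one-shot drum samples and a steadily increasing tempo, could be built on AudioSettings.dspTime and AudioSource.PlayScheduled. The class, pattern and tempo values below are illustrative, not the game’s actual music engine.

```csharp
using UnityEngine;

// Illustrative sketch of a tiny step sequencer: each short drum sample is
// scheduled individually, so the rate at which samples are queued dictates
// the rhythm. Pattern, tempo and field names are assumptions.
public class DrumSequencerSketch : MonoBehaviour
{
    public AudioSource kick;
    public AudioSource hat;
    public double beatsPerMinute = 100;

    double nextBeatDspTime;
    int beatIndex;

    void Start() => nextBeatDspTime = AudioSettings.dspTime + 0.5;

    void Update()
    {
        // Schedule slightly ahead of time so playback stays sample-accurate.
        if (AudioSettings.dspTime + 0.1 >= nextBeatDspTime)
        {
            kick.PlayScheduled(nextBeatDspTime);                         // kick on every beat
            if (beatIndex % 2 == 1) hat.PlayScheduled(nextBeatDspTime);  // hat on off-beats

            beatIndex++;
            nextBeatDspTime += 60.0 / beatsPerMinute;
            beatsPerMinute += 0.05;                                      // progressively faster
        }
    }
}
```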
The music engine and platform movement were tackled first, and took a considerable amount of time to build. After a few playthroughs with these basic systems in place, something did not feel quite right. At the time, it seemed related to how the background looked. Predominantly using drum samples, the overall music style suggested a tribal feel, and it seemed important at the time to have a good match between the music and the level’s visual appearance, hence the different backdrop experiments.
After these four different backdrops, something was still off. Progressively, I realized that the real problem was the platform’s shape. On a circular platform, the best strategy is to safeguard the inner pieces, since they are easily reachable and have a lower angular speed than the outer ones. Falling prey to the sunk cost fallacy, some attempts were made to save the concept: rotating the inner pieces at a higher speed, rotating them on different axes, and implementing piece decay, which released a piece if the player stayed on top of it for too long. None of them worked, since these obstacles could be easily bypassed, as shown by video number 3 above.
The last backdrop experiment was a simple water floor, which focused the player’s attention on the platform and had the additional benefit of providing a near plane onto which the player’s shadow could be cast. The vertical player shadow was a very important reference for the player’s position.
The level would need to be completely redesigned or scrapped altogether, since it was not acceptable in its current form. It ended up being scrapped, but the waterfall backdrop (video 2. above) was later reused by Unfair Fair, and two new levels were forked from Garfunkel’s base elements: Beatmill and Big Giant Head.
Beatmill addressed one of the major issues in Garfunkel: the easy exploit of the circular platform shape. The solution was to have several loose square pieces that were part of a crosswise or lengthwise treadmill. Randomly moving each piece along these two directions meant that no piece stayed permanently in the same area of the platform (like the center). Every single piece was important to treasure and save.
The treadmill system was developed from scratch, and Garfunkel’s music engine and water floor backdrop were reused.
After developing the level’s basic game elements and validating their gameplay via play testing, the decay concept was recycled from Garfunkel, with slight modifications. A special black button was placed as one of the platform’s pieces. If pressed, it decayed the adjacent pieces. If a decayed piece was touched, it would fall.
This concept apparently worked well, but when play tested with a group of friends, it was pointed out as too punishing. When the special black button was pressed, the level’s difficulty skyrocketed, making the already difficult task of reaching prompted pieces even harder. As a result, pressing the black button was seen as a death knell, urging players to restart the level right after it was pressed. After a few rounds of this, collective despair would soon grow due to the overwhelming difficulty.
The solution was to invert the black button’s function. Instead of decaying the adjacent pieces, it would revive them. This time around, the special black button was not avoided out of fear, but out of greed: it was a precious resource, to be used as late as possible.
Garfunkel’s platform did not work in a survival setting, but could it work in a different one? For it to work, the level’s progress could only advance when the player touched his/her respective prompt: an objective sequence. The level also needed a long-term objective for these sequences to link up to. It turned out the game campaign was lacking a final boss, and this was a good opportunity to salvage Garfunkel’s platform (and its movement and prompt system) into a new level, the final one. The final boss.
The defined long-term objective was to deplete the boss’s life, but the players needed a way to interact with it. The first idea was to drain the boss’s life every time a prompted platform piece was hit, but that bore two problems: the interaction was too indirect, and there was no opposing force giving the boss a chance to defend itself. Hazards could be used to fulfil this purpose, but that dynamic was too easy and insipid. The final boss was expected to be more challenging than all previous levels, and to require the player to use most, if not all, of the previously acquired learnings.
Inspired by classic game bosses, which stack different boss stages, the solution was to materialize the boss as an anthropomorphic head, equipped with a turret, that appeared in the center once it was small enough to fit the platform’s center gap. The head shrank every time a sequence of prompted platform pieces was hit. The players could directly interact with the center head by hitting it, draining the boss’s life. Difficulty progression was accomplished by decaying a set of platform pieces every time the small center head was defeated.
“Giant washing machine drum” was the initial concept behind this level. The challenge was to transform this germinal idea into a fun level, coherent with the game’s survival motto. Some scattered ideas were drafted on paper, such as having interactable controls to make the drum rotate in a given direction or stop the rotation entirely, but none of them were successfully transformed into a viable challenge or part of a larger one.
The most interesting avenue was to reuse the piece decay concept from Beatmill. In Unfair Fair, the decay was caused by a specific hazard which, if not hit by its respective player, exploded and decayed the nearby platform pieces. Once these pieces were touched, they fell. The pieces near the platform spokes were decayed at the start, to increase the chances of partial or complete piece group detachment.
The first iteration of this hazard exploded after bouncing a given number of times, which proved unfair when the hazard bounced close to the ground, giving the player(s) little opportunity to react.
The next and final iteration had the hazard explode a set amount of time after its first bounce, which gave a fair chance for its dismantling. The visual presentation was reworked to better convey how close the hazard was to exploding. The time taken to explode would decrease as the level progressed.
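A minimal sketch of this timed fuse is shown below, assuming a collider-based hazard; the class name, fuse length and the ExplodeAndDecayNearbyPieces placeholder are hypothetical, not taken from the game’s code.

```csharp
using UnityEngine;

// Sketch of a hazard that arms a fuse on its first bounce and explodes a
// fixed amount of time later. ExplodeAndDecayNearbyPieces() is a placeholder
// for whatever the level does with the nearby platform pieces.
public class BouncingFuseHazardSketch : MonoBehaviour
{
    public float fuseSeconds = 3f;

    bool armed;
    float explodeAt;

    void OnCollisionEnter(Collision collision)
    {
        if (armed) return;
        armed = true;
        explodeAt = Time.time + fuseSeconds; // the fuse starts on the first bounce
    }

    void Update()
    {
        if (armed && Time.time >= explodeAt)
            ExplodeAndDecayNearbyPieces();
    }

    void ExplodeAndDecayNearbyPieces()
    {
        // Placeholder: play VFX, decay adjacent platform pieces, then despawn.
        Destroy(gameObject);
    }
}
```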
Notice the waterfall backdrop below, repurposed from one of Garfunkel’s experiments.
Upon Garfunkel’s closure, development began on a concept revolving around an oil-rig-shaped platform. The platform had four pillars, each destroyed when its respective button was pressed. The players had no motivation to press these buttons themselves, but the buttons could be pressed by the stomper hazard developed previously for RocketX, whose only task was to press them.
To add another challenge dimension, a concept from the game’s prototype was added: decay rockets (in the prototype they were represented as rectangular parallelepipeds). Upon landing, these hazards decayed the platform piece below until the piece was completely detached from the platform. Other hazards would be spawned throughout the level’s lifecycle for increased diversity.
The initial concept on paper was a 2.5D level in which a fluid (water/lava) would progressively rise as the player(s) made their way up a series of platforms and objects. The rising fluid concept seemed promising, but having a series of fixed 2.5D platforms seemed somewhat bland and left little space for cooperation dynamics.
The solution was to have an infinite number of procedurally generated platforms, each higher than the previous one. The only way to reach the next platform was via an elevator assigned to a specific player, which first had to be activated by touching all of its respective prompts. The elevators would only rise when all players occupied their respective elevator.
Red Blob Games’ guides on hex grids were immensely useful when building the procedurally generated honeycomb platforms. These guides couple well-explained theory with concise practical implementations, and I highly recommend them to anyone doing hex grid work.
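For reference, here is a minimal sketch of the kind of axial-to-world conversion those guides describe, written for pointy-top hexes laid out on Unity’s XZ plane; the hexSize parameter and class name are assumptions for illustration, not the game’s code.

```csharp
using UnityEngine;

// Sketch of axial hex coordinates (q, r) mapped to world positions on the XZ
// plane, following the pointy-top layout from Red Blob Games' hex grid guides.
// hexSize (center-to-corner distance) is an assumed parameter.
public static class HexGridSketch
{
    public static Vector3 AxialToWorld(int q, int r, float hexSize)
    {
        float x = hexSize * (Mathf.Sqrt(3f) * q + Mathf.Sqrt(3f) / 2f * r);
        float z = hexSize * (1.5f * r);
        return new Vector3(x, 0f, z);
    }
}
```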
Upon the first play test session with friends, it became apparent that small tweaks were required to the elevators’ behaviour. Because elevators would only rise when all players were sitting on top of their respective elevator, a problem arose if, near the upper platform, someone abandoned their elevator earlier than the others. At that point, all elevators would drop, taking most of the players with them.
The solution was to make each elevator independent once it was close to the upper platform, allowing for a smoother team transition between platforms.
It is generally a good idea to write the introduction of a book, essay or paper last, since you need a very solid idea of the shape of the finished product, and of exactly what you need to mention up front for everything to hold together. The tutorial level was left for last for the same reasons.
It was the game’s first level, and had the responsibility of introducing the game and providing the tools and knowledge required to play it: specifically, how to jump, double jump, dash and stomp. The overall concept of the level was inspired by an early version of Gang Beasts’ tutorial, which was pedagogic and fun. Having the controls laid out as part of the scenario, instead of using a UI overlay fixture, was something quite interesting as well.
The first iteration had some of the required pedagogic elements, but was static and insipid when played, even though it had some interactive elements.
The following idea was to have a series of small platforms floating in a pond. The instability of these floating platforms imprinted a livelier dynamic and was more physically sound than the initial tutorial iteration.
After a play test with friends, it was noticeable that very few people understood how to dash, or how important the combination of jumps, dashes and stomps was for gaining air control.
The solution was to add an animated billboard illustrating how the dash charge worked, and two additional sections focused on air control. The following play tests showed that these changes made a significant difference in players’ comprehension of the game’s fundamental movements and techniques.
This level was not used, since no plausible game dynamics were found to fit the overall concept of the game, but the sheer fun and simplicity of moving around high stacks of blocks were worth a special mention.
Most of these interfaces were built in the late stages of development. Following the Visual Aesthetics laid out earlier, the first rough screens were sketched out, starting with the home screen. Some experiments were made with the game’s logo and UI elements. The UI buttons’ design, for example, took its colors and shape from a core element of the game, the hexagonal wave counter.
At about the same time, the title screen was built. Presented when opening the game, right before the home screen, it was crafted to create an impactful first impression and to later be used as key art, showcasing the most important elements of the game: logo, players, enemies and some levels. Another important aspect was to illustrate that this is a co-op game, so the 4 players were placed in the forefront, aligned so that no one stood clearly in front of anyone else, giving roughly equal prominence to all of them. The screen drew inspiration from other games’ key art, such as Super Mario Land 2’s, which presents all of the game’s key elements in one powerful, stylized image.
The home screen was later refined and the remaining screens were progressively built and polished. These were the options (video, audio and controls), player selection, pause, end game, game selection and stats screens.
Music and sound effects were built using GarageBand. Many freely available sound libraries such as the Sonniss GameAudioGDC Bundles were used to mash up different samples into new sound effects.
The audio aimed to be clean and coherent, so a small subset of samples from the immense libraries available was used and reused whenever possible, settling on a mostly electronic style for the music and on instruments or natural audio effects for the SFX.
I believe coherency is gained by using a small set of musical keys throughout the game. Survival Ball only used the C major key and its relative minor, A minor. Major keys are generally recognized as happier and more joyful, and minor keys as darker and heavier. In the game, each level or screen used one of these keys, according to the desired environment and feel.
One interesting nugget of knowledge I came to learn while developing the audio was that sound effects also fit within a certain key. The overall audio sounds much better when sound effects and music agree in key.
Another interesting tidbit was to remove the extremely low and high frequencies in the final mix. The effect is threefold: audio fatigue is reduced when editing; the overall composition seems cleaner to the listener; and more headroom is left in the audible frequency range, improving the composition’s focus and clarity.
The game ended up having more than 60 different sound effects and around 20 musical segments. A considerable amount of time was being wasted checking on the status of a given audio segment, so a simple spreadsheet was used to track each one’s name, category and state (done, needs revision, missing). This simple sheet improved the audio creation process considerably.
Needing to perform many iterations and tweaks without constant access to other testers, I built a simple AI to simulate multiplayer dynamics and get a better grasp of how each level handled them.
AI agents were specific to each level, and driven by a series of objectives. For example, in Beatmill, the agent’s main objective was to reach its respective colored prompt. If no prompts existed, it would move towards the platform’s center, the optimal place to wait for a prompt. It also had passive behaviours, such as avoiding gaps or special black buttons via jumps. As seen in the video below, the agents are not perfect, but they set a good working baseline.
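A rough sketch of that objective-driven structure is shown below; the class name, lookup and steering placeholders are hypothetical, standing in for the level-specific logic described above.

```csharp
using UnityEngine;

// Sketch of a level-specific agent driven by a simple objective priority:
// chase the currently prompted piece if one exists, otherwise idle at the
// platform's center; a passive check jumps over gaps and black buttons.
public class BeatmillAgentSketch : MonoBehaviour
{
    public Transform platformCenter;

    void Update()
    {
        Transform prompt = FindMyColoredPrompt();        // level-specific lookup (placeholder)
        Vector3 goal = prompt != null ? prompt.position : platformCenter.position;

        MoveTowards(goal);

        if (GapOrBlackButtonAhead())                     // passive behaviour
            Jump();
    }

    Transform FindMyColoredPrompt() { /* query the level for this agent's prompt */ return null; }
    void MoveTowards(Vector3 worldPosition) { /* steer the ball towards the goal */ }
    bool GapOrBlackButtonAhead() { /* short raycast ahead of the agent */ return false; }
    void Jump() { /* reuse the player's jump movement */ }
}
```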
Once roughly tweaked, the game was presented (preferably for the first time) to a group of friends during a play test session. These sessions were of utmost value, since they provided valuable feedback, criticism and/or validation for the various game elements.
There were 6 play test sessions in total. Apart from how valuable these were to improve the game, they were also a great opportunity to gather a group of friends and have a good time. The game was modified and tweaked between each session, after digesting the impressions and feedback of the previous one.
The game had some backdoor hooks which allowed for quick macro difficulty tweaks, e.g. how many waves were necessary to unlock a level. These were especially useful in the first play session, since the game was extremely punishing in its early incarnations.
Some guidelines I tried to follow:
The game was launched on November 8th, 2018. One month earlier, the Steam store page, official website and Twitter account were brought online. During that month, a closed beta ran on Steam, which served mostly for last-mile validations, since the bulk of the iterations had been made during the in-person play testing phase. At the same time, several dozen keys were sent to YouTubers, particularly those specializing in couch co-op games. For the ones contacted by email, I took into account this video by Stephen, where he describes what kind of emails he expects to receive from developers.
After launch, I was advised by fellow developers that the above strategy was not optimal, because it is usually a good idea to build a following by sharing your progress and interacting with the community during development (via social networks or dev blogs, for example), potentially increasing the game’s exposure and sales. For a possible new game, it would be interesting to adopt this strategy and take note of how workflow and game design are influenced.
Shortly after completing the first beta build of the game, I submitted my application to Indie X, the biggest indie game showcase/contest in Portugal. Fortunately, Survival Ball was accepted as one of the 55 finalists, meaning that it would be showcased at Lisboa Games Week 2018 a week after the game’s launch!
A custom build was specially crafted for the event. To better fit the event’s environment, the build offered an arcade experience through local leaderboards for the players with the highest number of completed waves, and leaderboards for the fastest players to finish Big Giant Head or the Tutorial. A simple controls cheat sheet was added to the pause menu, and the end game screens were changed to allow the players to enter their (group) name into the leaderboard. All levels were unlocked in this build, avoiding the need to finish the campaign to access a specific level.
The event surpassed all my expectations. It was the first time I saw, in person, swathes of anonymous people playing Survival Ball. It was pleasantly surprising to recurrently see many groups of gamers trying out the game, playing for hours, and even returning back to the booth at a later time. Not much sleep was had during those four days, but it was rewarding to witness such moments.
The showcase was also a great opportunity to connect with other developers and get to know their stories and games. The overall experience was incredible and I am immensely thankful to the organizers for putting it all together 🙏
Time was tracked during the entire span of the project, in various categories.
Mostly invisible, yet essential, camera work is key to any game with dynamic cameras. This article dissects a concise Unity open source library which dynamically keeps a set of objects (e.g. players and important objects) in view, a common problem for a wide range of games.
The library was developed for, and is used by, my first Steam game, Survival Ball. The game has a heavy shared-screen local co-op component, which requires the camera to dynamically keep many key elements in view.
There are good camera resources for Unity, but I found them to either do too much or too little for this specific problem, so I thought this could be a good opportunity to learn a bit more about dynamic camera movement and to share that knowledge and code with the community.
The library is fed with the desired camera rotation (pitch, yaw and roll), the target objects that will be tracked and the camera that will be transformed.
The library’s sole responsibility is to calculate a camera position in which all targets lie inside its view. To achieve this, all target objects are projected onto a slice (plane) of the camera’s view frustum. The projections located inside the view frustum will be visible and the others will not. The idea is to trace back a new camera position from the outermost target projections, since this way we are guaranteed to include all projections inside the view.
In order to make the bulk of the operations easier to compute, the process starts by multiplying the camera’s inverse rotation with each of the target positions, which places them as they would be if the camera’s axes were aligned with the world’s axes (identity rotation). Once the camera position is calculated in this transformed space, the camera rotation is multiplied by this position, resulting in the final desired camera position. The actual camera position is then progressively interpolated towards this desired position, to smooth out the camera movement.
Most of the operations are performed in this transformed space, where the camera’s axes are aligned with the world’s axes (identity rotation). After the targets are rotated into the camera’s identity-rotation space by multiplying the camera’s inverse rotation with each of the target positions, the first task is to calculate their projections.
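In Unity terms, that first step is a quaternion multiplication per target; a minimal sketch is shown below, with illustrative names rather than the library’s actual code.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: move target positions into a space where the camera's rotation is
// the identity, so the view frustum math can be done axis-aligned.
public static class CameraSpaceSketch
{
    public static List<Vector3> ToCameraAlignedSpace(Quaternion cameraRotation, IEnumerable<Transform> targets)
    {
        Quaternion inverse = Quaternion.Inverse(cameraRotation);
        var transformed = new List<Vector3>();
        foreach (Transform target in targets)
            transformed.Add(inverse * target.position); // rotate into identity-rotation space
        return transformed;
    }
}
```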
Please note that in all the figures below (with the exception of the horizontal field of view angle section), the camera is present for reference only, as its final desired position will only be uncovered in the final step.
For each target, four projections are cast onto a plane parallel to the view plane, sliced from the camera’s view frustum. The line from the target object to its respective projection is parallel to the camera’s view frustum edges. Relative to the camera, two of these projections run horizontally, and the other two vertically.
If any of the target’s projections is outside the camera’s view frustum (or its sliced plane), then the target object will not be visible. If they are all inside, the target object will be visible. This means that the four outermost projections across all targets define the limit of where the view frustum must be in order to have all objects in view or partially in view. Adding some padding to these outermost projections (i.e. moving them away from the center of the view frustum plane slice) results in additional padding between the target objects and the camera’s view borders.
For all vertical projection positions, we are interested in finding their Y component. In the figure below, notice the right triangle with one vertex on the target object and another on the projection. If we find the length of the side running parallel to the projection plane, that value can be added to the Y component of the target’s position, resulting in the Y component of the upper projection.
The angle at the target’s vertex of this triangle, $\theta_v$, is equal to half the camera’s vertical field of view angle. The vertical field of view angle is provided by the camera’s fieldOfView attribute in degrees, which needs to be converted to radians for our purposes ($\theta_v = \frac{\texttt{fieldOfView}}{2} \cdot \frac{\pi}{180}$).
The triangle’s adjacent edge length (relative to $\theta_v$) is known, since it is the distance between the target and the projection plane along the camera’s forward axis, thus we can find the length of the opposite side of the triangle using trigonometric ratios: $\text{opposite} = \text{adjacent} \cdot \tan(\theta_v)$.
With this, the upper projection’s Y/Z components can be fully calculated. The bottom projection has the same Z component as the upper one, and its Y component is equal to the target’s Y component minus the calculated opposite triangle edge length.
The horizontal projections follow a similar set of calculations, the difference being that we are now interested in finding the X component (instead of Y), and the horizontal field of view angle is used instead of the vertical one. The horizontal field of view angle and its half value ($\theta_h$) need some further steps to be computed, which are detailed in the following section.
Consider the following figure, in which $\theta_h$ represents half the horizontal field of view angle, $\theta_v$ half the vertical field of view angle, $w$ the viewport width, $h$ the viewport height and $d$ the distance from the camera to the viewport plane:
Using trigonometric ratios, these two equations can be devised:
$$\tan(\theta_h) = \frac{w/2}{d} \qquad\qquad \tan(\theta_v) = \frac{h/2}{d}$$
Replacing $d$ in the first equation with its definition from the second one:
$$\tan(\theta_h) = \frac{w/2}{\frac{h/2}{\tan(\theta_v)}} = \frac{w}{h}\,\tan(\theta_v)$$
Unity’s camera has an aspect attribute (view canvas width divided by height, i.e. $\frac{w}{h}$), with which we can finalize our equation and obtain the half horizontal field of view angle: $\theta_h = \arctan\!\left(\text{aspect} \cdot \tan(\theta_v)\right)$.
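Translated into code, using Unity’s Camera.fieldOfView (vertical, in degrees) and Camera.aspect, the two half angles could be computed as in the sketch below (an illustrative helper, not the library’s exact code):

```csharp
using UnityEngine;

// Sketch: vertical half FOV straight from the camera, horizontal half FOV
// derived from it via the aspect ratio, as in the derivation above.
public static class FieldOfViewSketch
{
    public static float VerticalHalfAngleRad(Camera camera)
        => camera.fieldOfView * 0.5f * Mathf.Deg2Rad;

    public static float HorizontalHalfAngleRad(Camera camera)
        => Mathf.Atan(camera.aspect * Mathf.Tan(VerticalHalfAngleRad(camera)));
}
```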
Having all target projections calculated, the four outermost ones are picked: the leftmost and rightmost among the horizontal projections, and the bottommost and topmost among the vertical ones.
The X and Y components of the desired camera position in the transformed space are the midpoints of their respective outermost projections; that is, the midpoint between the leftmost and rightmost projections is the camera’s X position, and the midpoint between the bottommost and topmost projections is the camera’s Y position.
The Z component of the camera position in the transformed space is calculated by backtracking a view frustum from the outermost projections to a camera Z candidate. The Z component furthest from the projection plane will be chosen, so that the final camera position contains all targets within its view.
Once again, trigonometric ratios will be used to calculate these Z position candidates.
Between the Z candidate derived from the horizontal outermost projections and the one derived from the vertical outermost projections, the value that places the camera furthest from the projection plane will be picked for the camera’s Z position component in the transformed space.
With the camera position calculated in the transformed space, we can now multiply the desired camera rotation with this position, which will provide us with the final desired camera position. The actual camera position is then progressively interpolated towards this desired position, to smooth out the camera movement.
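A compact sketch of these last steps is shown below, assuming the four outermost projection values and the projection plane’s Z are already known in the transformed space, along with the two half field of view angles; the names and the smoothing approach are illustrative, not the library’s actual API.

```csharp
using UnityEngine;

// Sketch: derive the desired camera position from the four outermost
// projections (in the camera-aligned space), rotate it back to world space
// and smooth the actual camera towards it.
public static class CameraFitSketch
{
    public static Vector3 DesiredWorldPosition(
        float left, float right, float bottom, float top, float planeZ,
        float horizontalHalfAngleRad, float verticalHalfAngleRad,
        Quaternion cameraRotation)
    {
        // X/Y: midpoints of the outermost projections.
        float x = (left + right) * 0.5f;
        float y = (bottom + top) * 0.5f;

        // Z: back off far enough that both the widest and the tallest extents fit.
        float distanceForWidth  = ((right - left) * 0.5f) / Mathf.Tan(horizontalHalfAngleRad);
        float distanceForHeight = ((top - bottom) * 0.5f) / Mathf.Tan(verticalHalfAngleRad);
        float z = planeZ - Mathf.Max(distanceForWidth, distanceForHeight);

        // Rotate the camera-aligned position back into world space.
        return cameraRotation * new Vector3(x, y, z);
    }

    // Smoothly move the actual camera towards the desired position each frame.
    public static void MoveCamera(Camera camera, Vector3 desiredWorldPosition, float smoothing)
        => camera.transform.position = Vector3.Lerp(
            camera.transform.position, desiredWorldPosition, smoothing * Time.deltaTime);
}
```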
The library is available on GitHub and the Unity Asset Store. An example scene of the library’s usage is included. Feedback is most welcome and I hope this can be useful!