I wrote a book on Open Source Software Licensing, but why?


One part of my role over the last 15 years has been to police the flurry of software components developers want to leverage. In response, I developed internal systems and processes, and gained experience interpreting licence agreements. I've always tried to keep on top of such things because I didn't want to be distracted by repercussions later. I just want to get back to code.

During a recent customer deal, my experience expanded when one client insisted that, as part of a software escrow deposit, any hint of open source contamination could trigger exposure of our code to the customer for investigative purposes. This was swiftly followed by a highly elevated level of scrutiny over our codebase, leading us to outsource independent verification of our findings. We wanted confidence that we had taken all reasonable and diligent steps to protect ourselves and our assets. Being competent, confident and responsive to third-party component usage information requests has also made further deals and engagements progress more smoothly for all involved.

Over the last year, I bought a number of books on Open Source Software Licensing to solidify my knowledge. It quickly dawned on me that not many provide pragmatic advice on how to start taking control of the problem, or how to manage it day-to-day. They are often filled with more information than I needed.

When I work, every minute is precious – I hate being inefficient or ineffective, and I’m sure others do too. So I decided to write a book that was going to get straight to the point – helping people like me who really just want to get back to coding as soon as possible.

I thought it was going to be easy – I was wrong.

At first I wanted to focus on purely pragmatic advice, and forgo dull introductions, build-up and terminology. But it soon became apparent that I needed to build some foundational knowledge in the reader before laying out my experiences. This started to pollute my vision, but it felt like a necessary evil, and I suddenly understood why many books do this. Everyone needs to be on the same page (pun by design). I also wanted a quick-fire Q&A section, but realized that it would mainly be a copy and paste of the content in the book, which was already fairly minimal.

I was writing during the evenings and most weekends, and I soon realized that writing on such a topic was hard going. Every time I wrote a sentence I doubted whether what I was saying was fact, suspecting it was just my crazy opinion; so I then had to research and find corroborating support in books or online articles. This obviously slows down the process, while adding legitimacy and protecting me from embarrassment. Further slowdowns were caused by writer's block – or, as I like to call it, word blindness – and imposter syndrome.

During the process I discovered many articles and useful websites I just had to try and include in the book. Unfortunately that broke the planned structure. In the end I rewrote it about fifty-five billion times – give or take one. Rewrites and reorganizations take time, and you also lose track of what you've reviewed and whether you've already covered a topic. Rewrites also introduce more grammatical errors, something I'm already terrible at.

Eventually I just decided to stop worrying about it and dump all the content into the book, then go through it linearly (many times, in both directions) to make sure I hadn't stated the same thing twice (or at least not verbosely). Then I streamlined some more, making sure the pragmatic ethos was applied. After drawing some pretty diagrams, formatting headings, adding emphasis to the correct words and getting page breaks right, I felt ready to publish. The PDF was ready.

Little did I know that it would take another month to publish. But kind words from those I gave pre-release copies to really helped push me on – @MrRio and Laura Turner, to name a couple.

If only I'd googled 'how to write for a Kindle device', things would have been a lot smoother. No one set of formatting seemed to work well across the iPhone/iPad/Fire readers. After many nights of pain, I had to drop Apple Pages and use Microsoft Word. All the beautiful formatting and indentations were lost, and artwork had to be redone. The Amazon cover page creator is seriously rubbish too; I started to lose the will to live.

It almost got to the point where I was going to publish regardless of how scraggly it looked, and never read it again. After telling my friend Amelia, a professional copy editor, about my Kindle woes, she just nodded constantly, as if she'd heard it all before.

I've sworn never to write a book again; the process has been so traumatic. Not something I enjoyed, but I don't regret doing it. After all, it's kinda cool to say I'm an internationally published author!

As for sales, I'm not going to retire anytime soon. At the time of writing I've had one purchase in Japan, one in North America, and 70 normalized pages read through the Kindle subscription service. I knew this was going to be a niche market, so no surprise there!

I do have some plans to neaten the book up a little; for a start, the bibliography and references section needs help – I was largely at the mercy of the Microsoft Word to ePub conversion, and I gave up the fight.

I could also do with adding recent events regarding the Oracle Java API case, and elaborating more on some of the popular licences in play and their associated risks and permissiveness.

I'm dreading the first reviews – it's been a rough experience and reviews can be quite brutal. I just hope the audience appreciates that I've tried to respect their time by keeping it minimal and pragmatic. Given that Kindle pays per page read, I'm not surprised there are some oversized books on the shelf!

Here's the book in all its glory – it's fun to see what it looks like in Japan's store.

Amazon US

Amazon UK

Amazon JP


GameDev Diary – SpaceLife #7


It was all going so well.

I placed some pre-canned planets procedurally, and made the planets farthest from the sun gaseous. Then I decided to be a fool and turned on lighting, producing some lovely results, as you can see below:

 

Yay! That looks cool.

But then I discovered something quite awkward.

The sun is in a mini, scaled down solar system 30,000 units away. So that’s where the directional lighting is.

The ship and asteroids are NOT in the same place. So we're actually getting the right shadows by luck rather than judgement. If I warped behind the sun, we'd get the shadows on the wrong side, because it's all been faked. We'll also find that shadows of planets will only fall on other planets and not on the ship or asteroids, because for them the light is coming from a different direction and distance (we won't get the correct cone of shadow).

I’m not 100% sure what to do about this. I need to prevent light bleed from both locations, yet represent the light in both locations.

If I create a fake light in my 'local space' and curtail the light effects of the mini solar system so they do not bleed into us, then we still won't get shadows cast by planets. That will manifest itself when we start to add space stations. If a space station is near a planet, and the sun is behind the planet, then the space station's rear should be dark too. But because I'll be faking a local light with no planet in the way, that won't be the case.

So I either work out something clever, or I keep this simple-stupid.

Time to read up on lighting I think! http://www.edy.es/dev/docs/unity-5-lighting-cookbook/

Eureka-ish

Using a point light in the scaled-down universe gives me an area-of-effect light while curtailing its distance. That's the solar system lit up nicely! You can see here that a planet further away is bright as day, while the back side of the planet near us is dark.

2016-06-04 13_53_03-Start.png

However, if I were to add some asteroids, or change the camera to see the cockpit, they wouldn't be lit yet. Time to add another spotlight that lives in our near universe, mimicking the position of the sun.

This is achieved by a spotlight 5000m away in the +ve Z axis, pointing at us. Here’s a picture of how it all pans out.

2016_06_04_14_21_44_Start.png

Now the trick will be to keep the sun in the correct place – remember the floating origin trick? If we do nothing and keep our local sun as a root game object, it will work, but you'll see the intensity of the light get higher and higher until the player reaches the threshold for the floating origin 'push back'. Suddenly the lighting conditions will appear to flip to a darker scene. So we have to keep the local sun at the same distance from us, and if its starting position is different (maybe in future we'll warp to a planet on the other side of the sun), it must stay at the same elevation/position relative to the mini universe.
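The 'keep the local sun consistent' idea can be sketched roughly like this. All the names here (miniSun, miniOrigin, localSunDistance) are assumptions for illustration, not my actual code:

```csharp
using UnityEngine;

// Sketch: keep a local "sun" light at a fixed distance from the player,
// pointing the same way as the real sun does in the mini universe.
public class LocalSunAnchor : MonoBehaviour
{
    public Transform player;        // the ship, kept near 0,0,0 by the floating origin
    public Transform miniSun;       // the sun inside the scaled-down solar system
    public Transform miniOrigin;    // where the player "is" within the mini universe
    public float localSunDistance = 5000f;

    void LateUpdate()
    {
        // Direction from our position in the mini universe to its sun.
        Vector3 sunDir = (miniSun.position - miniOrigin.position).normalized;

        // Park the local light at a constant distance so its apparent
        // intensity never changes as the player drifts, and aim it back at us.
        transform.position = player.position + sunDir * localSunDistance;
        transform.rotation = Quaternion.LookRotation(-sunDir);
    }
}
```

Because the position is recomputed every LateUpdate, the floating origin 'push back' no longer makes the lighting appear to flip.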

What have we lost? Shadows from planets. But in reality, the player probably won't even notice.

After adding some ambient light, our scene looks nice, like the first images of ‘fake success’.

2016-06-04 14_34_30-Unity Personal (64bit) - SkyboxTestScene.unity - SpaceLife - WebGL_ _DX11_.png

2016-06-04 14_38_50-Start.png

Next I think I need to tidy up the current code and start thinking about how to warp near planets to get to their space stations – if they have them. One day I'll get rid of the lame warp-up/down controls; they're just for playing around.


GameDev Diary – SpaceLife #6


Generate this!

It's easy to get lost when you're a solo developer. Now and again it's a good idea to come up for air, think about what your goal is – or even your critical path – and ask yourself questions like "Is this a game I would play?".

I'd like to say my goal was to make the best space game out there, but that's not realistic. Whilst I was looking into procedural generation of solar systems and planets, I came across an interesting debate. There were voices declaring that the procedural generation of 18 quintillion planets in NoMansSky didn't sound like fun: the variation between systems would develop patterns and not be engaging enough. The fear is that computer-generated content can be boring by virtue of the inevitable constraints on the parameters involved; creativity and purpose will be limited. We humans are incredibly good at sensing repetition, even if we can't immediately see it. What's the point in travelling across a few thousand planets if you feel they're all roughly the same? Maybe that's the inevitability of the universe. It probably is like that, only more boring. Let's not forget how excited scientists are to sample some dust, or at the mere hint of water on Mars.

Players often want some kind of journey, full of delight and surprises; others are quite happy being a nomad, living away from civilization and building their own game in their heads – but surely enough real content needs to exist for this to be more than a fleeting engagement.

Of course, procedural generation takes a huge load off a development team, which no longer has to create every minute scrap of content – so it's perfect for a solo developer (as long as it's kept simple). In my search for the balance between fun and procedural generation I came across this great blog post by Kate Compton. I hope you appreciate the link – it took me an hour to re-find it from yesterday!

What am I making (the generator)?

  • A star system generator, which can lay out space stations, jump gates, hidden treasures, asteroid belts, resources, planets, nebulas – you name it. Ok, maybe that's too much right now; let's focus on just planets and nebulas.

What’s good/important about my star systems?

  • If the system is a solar system, it should contain a sun/star ('solar' gives it away?).
  • Planets should move around elliptical orbits centred on the biggest mass. Those orbits do not have to be on the same plane, although most probably will be.
  • There may be one or two nebulas in the system with size constraints.
  • Some planets have moons, which should also orbit their planet.
  • There should be a background skybox.
  • The same solar system must exist and behave exactly as it did if we visit it again (unless we add some catastrophic event).

What must not happen?

  • Planets must not be too close to each other.
  • Objects must be limited in size.
  • Planets shouldn’t all be the same size or texture.
  • Moons should be smaller than the planets they circle, and must not collide.

Ok, that's still a lot to think about – so the first mission is just to procedurally generate the central star and the skybox.

Universe coordinates

To start procedural generation, we need a deterministic algorithm based on a fixed set of values: we always want the same output for a given input. This saves us from having to store lots of detail, configure scenes, test – you name it.

I imagine my starting star system is a box at coordinate 0,0,0.

Imagine there are other boxes above (0,1,0), below (0,-1,0), left (-1,0,0), right (1,0,0), forward (0,0,1) and back (0,0,-1). This can be the key for all the detail we need in the universe. Of course, this doesn't mean we can only move in six directions; there's nothing stopping diagonal movement in the game. I may decide that only certain routes are available through the universe, based on the availability of jump gates, and that we don't expose all of these boxes to the user. But I'll leave that alone for now – see how easy it is to get lost in your own thoughts?

If I wanted to make the player's universe sparser at the edges or center, I could use those x,y,z values to influence the cap on the number of planets, or other features.

e.g. if x, y and z all fall in the range -5 to 5, we are in 'high security space', where bandits do not appear; as we leave that zone, the percentage chance of them appearing increases. If we have a huge universe, we might use a more complex algorithm based on sine waves or simplex noise to produce interesting waves of security, which then inform what types of ships, alliances and predators are present.
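As a rough sketch of that idea (the five-box cutoff and the ramp rate are invented numbers, purely for illustration):

```csharp
using UnityEngine;

// Sketch: bandit encounter chance derived purely from universe coordinates.
// The 5-box 'high security' cutoff and the 4%-per-box ramp are assumptions.
public static class SecurityZones
{
    public static float BanditChance(int x, int y, int z)
    {
        // Chebyshev distance from the centre of the universe:
        // how many boxes out we are along the worst axis.
        int d = Mathf.Max(Mathf.Abs(x), Mathf.Max(Mathf.Abs(y), Mathf.Abs(z)));
        if (d <= 5) return 0f;                  // high security space: no bandits
        return Mathf.Clamp01((d - 5) * 0.04f);  // chance ramps up as we leave it
    }
}
```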

But for now, I’ll just use those universe coordinates to populate some data that I’ll use as a starting point for my skybox and solar system.

Here's the code to generate a deterministic set of numbers from vector coordinates. This gives us 20 bytes (values 0..255). We can isolate individual bit patterns for simple on/off decisions, or use the modulus operator to select an item from an array. If we wanted to, we could combine four bytes to make a 32-bit integer.

ProceduralGeneration101
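The screenshot doesn't reproduce well here, but going by the description – 20 deterministic bytes from vector coordinates – a sketch might use SHA-1, which outputs exactly 20 bytes. The coordinate string format below is an assumption:

```csharp
using System.Security.Cryptography;
using System.Text;

// Sketch: derive 20 deterministic bytes from universe coordinates.
// SHA-1 is assumed because it produces exactly 20 bytes; the "x,y,z"
// base-string format is also an assumption.
public static class ProceduralSeed
{
    public static byte[] ComputeHash(string data)
    {
        using (var sha1 = SHA1.Create())
            return sha1.ComputeHash(Encoding.UTF8.GetBytes(data));
    }

    public static byte[] ForBox(int x, int y, int z)
    {
        string baseData = x + "," + y + "," + z;
        return ComputeHash(baseData);   // same box in, same 20 bytes out
    }
}
```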

This algorithm only gives us 20 bytes – but what if we need more? (We will.) Simply append some text to the base string and generate another hash, e.g.

planetData = baseData + "Planet";

Then call ComputeHash() using planetData.

I think that’s enough blogging for now – I’m going to get my head down and bash out the nebula, orbits and planets and see what problems arise. Till next time.


GameDev Diary – SpaceLife #5


Having failed miserably at getting close to planets, or at having huge planets without introducing flicker through Z-buffer issues, and seeing the effort involved in Kerbal Space Program (KSP), I think that for now my time is best spent elsewhere. My scaled-down 3D skybox can still yield fantastic results, and for now the distant camera will not move. I may bring back camera movement in the 3D skybox at a later date, but I don't want to define playable game areas or have ugly 'turn around!' warnings.

From memory, the game EVE (when I played it) allowed the player to travel unconstrained around a system, but any planets or nebulas remained static. You can warp between jump gates and to stations, and I don't think planets got closer – if they did, it wasn't by much. You could also probe parts of the system and warp to them, still with the same scenery, and it didn't feel wrong. If I want the user to go to planets, I'll use space stations or specific warp sequences to achieve it, and maybe, like Star Wars Galaxies, have constrained game areas.

Regardless of whether I move or scale planets, I still need to follow the KSP floating origin mechanism and move all other game objects, including my skybox, keeping my player at 0,0,0 so that physics operates cleanly.

Well, that was pretty easy. Although, if I remember rightly, the floating origin scripts I've seen also do something funky with particle effects.


floatingorigin
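For reference, a minimal sketch of the floating origin idea (the 1000-unit threshold is an assumption, and a real version would also handle the particle-effect quirks mentioned above):

```csharp
using UnityEngine;

// Sketch of the KSP-style floating origin: when the player strays too far
// from 0,0,0, shift every root object in the scene back by the player's
// offset, so the player ends up at the origin again.
public class FloatingOrigin : MonoBehaviour
{
    public Transform player;
    public float threshold = 1000f;   // assumed 'push back' distance

    void LateUpdate()
    {
        Vector3 offset = player.position;
        if (offset.magnitude < threshold) return;

        // Move everything, player included (it is a root object too),
        // so relative positions are unchanged but coordinates stay small.
        foreach (GameObject root in gameObject.scene.GetRootGameObjects())
            root.transform.position -= offset;
    }
}
```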

I've had time to add one of the standard Unity flare assets to make the sun look sparkly too. But because the light is in the distant scene, it's not illuminating my ship – a problem for another day!

Sunshine.png

GameDev Diary – SpaceLife #5


Planet Hacking – scaling objects based on distance

Continuing my experiments in creating large planets, I came across this post, where people simulate large scales by resizing objects based on their distance from the camera: http://answers.unity3d.com/questions/137097/scale-objects-based-on-distance.html. I couldn't get the code in the post to work because I'm still moving the camera and not the world – but I created my own implementation based on the theory, using the downscaled 3D skybox containing the distantCamera and distantSun.

Scale based on distance.png

If you haven't noticed already, there's a glaring error in my maths:

  • As you get closer to the object, the scale isn't linear. With a starting distance of 40, if we are 20 units from the planet its scale is multiplied by 40/20 = 2, but if we move just 10 units closer, the multiplier becomes 40/10 = 4. Are we really twice as close? At 2 meters away the planet is 20 times bigger, and at one meter it's 40 times bigger. That's not right!
  • Even if the maths were right, we are doubling up distance movements by scaling AND moving objects: the edge of the object moves towards or away from us in addition to our own movement.

If we want to move around planets, we have to be able to move the camera (or the world around us), so I have to ask myself: why am I concerned with scaling objects by distance? What problem am I solving?

The problem is that within the mini-skybox, all planets have been downscaled, and as we move around we need them scaled up at runtime. This has two advantages: first, we can arrange our solar system with ease (you can't easily move items that are huge, or even fit them into the scene); second, we give the perception that moving 1/100000 of a unit feels like moving one unit. So, back to the drawing board on the maths – trying to account for a linear progression as well as the scale of the camera movement…
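For what it's worth, one way to make the maths behave is to pin a proxy object at a fixed render distance and scale it by the ratio of that distance to the real one, so its angular size stays correct: an object of real size S at real distance D subtends the same angle as a proxy of size S·(d/D) placed at distance d. A sketch, with all names assumed:

```csharp
using UnityEngine;

// Sketch: keep a huge, distant body inside the far clipping plane by
// rendering a proxy at a fixed distance, scaled so its angular size
// matches the real body's.
public class ScaledProxy : MonoBehaviour
{
    public Transform cameraTransform;
    public Vector3 realPosition;        // true position in world units
    public float realDiameter;          // true size in world units
    public float proxyDistance = 800f;  // fixed, comfortably inside the far plane

    void LateUpdate()
    {
        Vector3 toBody = realPosition - cameraTransform.position;
        float realDistance = toBody.magnitude;
        float k = proxyDistance / realDistance;   // shrink factor, linear in distance

        // Place the proxy along the same line of sight, at the fixed distance,
        // scaled down so it looks exactly as big as the real body would.
        transform.position = cameraTransform.position + toBody.normalized * proxyDistance;
        transform.localScale = Vector3.one * (realDiameter * k);
    }
}
```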

I've worked out what was wrong with the maths, but there's another problem: even if I solve the scale issue, Unity just can't handle it. If I want really large planets, drawing really large objects isn't the way forward. With a scale of 1.6m, a distance of 800,400 and a camera far clipping plane of 1000, I can see the planet, but it flickers really badly. Obviously I'm not the first person to try to tackle this problem, and it looks solvable if you know what you're doing. I currently do not!

The most interesting posts I’ve found so far on this subject are here:

http://www.gamasutra.com/view/feature/131393/a_realtime_procedural_universe_.php?print=1

http://www.davenewson.com/dev/unity-notes-on-rendering-the-big-and-the-small

And this awesome video

Either this will inspire or destroy me!


GameDev Diary – SpaceLife #4


Planet hacking – multiple distant cameras

Ok – attempt 1: multiple cameras that allow distant objects to be rendered, to overcome Z-buffer accuracy issues…

  • I added more cameras as children of the main camera.
  • I added a layer called 'distant', assigned distant objects to it, and set the culling mask to 'distant' (not shown below).
  • I changed the depth order so that the cameras render farthest first.
  • I changed the most distant camera to contain the skybox.
  • I altered the near and far clipping planes.
  • Exceeding a far clipping plane value of 1e+07 flooded my console with an ugly error (though it didn't stop the program).

DistantCameras.png

I made the sun super huge:

BigSun

I'm not particularly a fan of errors and warnings in my programs. This approach has worked, but it's not entirely satisfactory. I also wonder how many cameras I should add – where does it stop?

Thoughts so far on this approach:

  1. Unity soon complained about the values I entered (just warnings).
  2. When I approached the sun it looked massive – yay – but the meshes were very rough.
  3. When approaching them, my ship's coordinates are far from 0,0,0, which allegedly will cause madness in the physics calculations (if I were to hit an asteroid etc.).
  4. Any layered effects you put on the object tend to be very fiddly and don't show up (e.g. scaled-down transparent spheres and particle effects), and they might look ridiculous at close range. I'll have to do some more playing around to see if those can be worked around. Multiple LODs? Maybe switch meshes/objects out as they transition to closer cameras?

I’m going to shelve this idea for now and play with another….

Planet Hacking – The scaled down universe

For this experiment I moved my planet and sun to a point way outside the scene and re-parented them under a new game object. I then added a camera with a normal viewing distance, and placed the solar bodies apart, scaled down to 1/1,000,000th (one millionth) of their sizes.

miniuniverse

The final trick is to add a LateUpdate() function to my camera and tie in the distant universe camera as a public property of my main camera:

scalecam
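The screenshot doesn't reproduce here, but the idea can be sketched as follows – class and property names are assumptions; the scale factor of 1,000,000 matches the ship-moves-11-million-units / camera-moves-11-units observation below:

```csharp
using UnityEngine;

// Sketch: the distant (mini universe) camera copies the main camera's
// rotation exactly, but its movement is divided by the mini universe's
// scale factor, so the scenery creeps past very slowly.
public class ScaledCameraRig : MonoBehaviour
{
    public Transform distantCamera;       // camera inside the mini universe
    public float scaleFactor = 1000000f;  // assumed 1:1,000,000 scale

    void LateUpdate()
    {
        distantCamera.rotation = transform.rotation;
        distantCamera.position = transform.position / scaleFactor;
    }
}
```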

This worked great!

miniuniverse2.png

You can see my spaceship moved a huge distance (11 million units) while my camera in the mini universe only moved 11 units. However, as you can see, I'm now getting warnings that my spaceship is too far away. Another issue presents itself: when I get close to the sun, it doesn't appear big any more. It is only 20 units wide – and it looks 20 units wide close up. I've scaled the distance, but not the size. Should I scale the size whilst moving? It all seems perverse. Maybe have bounding colliders that let me reposition the sun further away but at a bigger scale?

Back to the drawing board!

I think it’s time to find out how others achieve this – rolling my own is starting to get frustrating!

It's fun trying, though.


GameDev Diary – SpaceLife #3


In thrust we trust

I think I've got a solution to the thrusting issues (ooer). 100% thrust looks nice, and at 100% speed you get the same rewarding jet, so you know you're travelling fast. (Yes, I know – in real space you would keep accelerating if that were the case.) When you're going at 50% of your max speed, you get a half-jet, and so on. The trouble is that when you're trying to recover from a sudden 180° flip or harsh pitching, you get no indication of the extra effort being put in to correct your course. Now you do! There's an extra flare of thrust for 'inertial dampening/correction'. However, these wacky calculations caused some issues when manually increasing or decreasing the thrust: the jet altered immediately, which looked poor. So I introduced another calculation just for when the user manually alters the thrust – lerping from the current size/length values to the target values over time.
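The lerping fix described above might look something like this – the names and the smoothing rate are assumptions:

```csharp
using UnityEngine;

// Sketch: smooth the jet's visual response to manual throttle changes by
// lerping the flame's length towards its target each frame, rather than
// snapping straight to it.
public class ThrusterFlame : MonoBehaviour
{
    public float lerpSpeed = 4f;     // assumed smoothing rate
    private float currentLength;

    // throttle in 0..1; correctionBoost is the extra 'inertial
    // dampening/correction' flare, pushing the target higher.
    public void UpdateFlame(float throttle, float correctionBoost)
    {
        float targetLength = throttle + correctionBoost;
        currentLength = Mathf.Lerp(currentLength, targetLength,
                                   lerpSpeed * Time.deltaTime);
        transform.localScale = new Vector3(1f, 1f, currentLength);
    }
}
```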

See the comparison below of 100% thrust and 100% with recovery thrust:

 

I've got a bit of code tidying to do, but other than that I'm happy for now. There's always room for improvement, but things like planets, space stations, mining ships and the usual space-fare are calling me. And yes – a nice dashboard/cockpit wouldn't go amiss.

Size is important

Ok, so now I'm trying to put planets in the scene. You may have already seen a screenshot where I placed a big yellow sun and a smaller planet in the scene. It was all fake – you could run up to that planet in no time, and it'd be the size of a rock. That's not what we want. Unfortunately Unity (and many games) have issues when trying to represent far-away objects.

  1. If you try to encompass far-away objects in your main camera, you can end up drawing a lot of objects you might never be able to see (slow).
  2. A large viewing area (near and far clipping planes) leads to errors in the drawing pipeline, because the Z (depth) buffer can't easily calculate the correct order.
  3. As distances move far away from (0,0,0), physics calculations can go screwy.
  4. The accuracy of objects can only be represented to about 7 significant figures. Thus you can model small items successfully, but really large items start to lose a few meters, kilometres or more depending on how big they are. (Let's not forget the earth is 149,000,000,000 meters from the sun, and the sun is 1,392,000,000 meters wide.) So I may just get away with it, as I don't really need physics in play for those planets – but I might want to get near them, which puts me way off the (0,0,0) origin.

Obviously suns can be seen from very far away. If we extended our camera's range to cover them, it would have to be huge. Not really feasible. Time to look for solutions.

Looking around the forums I discovered a few tricks which I’m going to try out:

  1. Combining cameras: set up your main camera to render 0.3 to 1,000, then two more cameras which render 1,000 to 10,000,000 and 10,000,000 to 100,000,000,000. I can see how this solves the Z-buffer issue, but it might not solve drawing too many objects without a bit of help.
  2. Combining cameras: keep your main camera as is, but have a scaled-down solar system somewhere out of sight of your main scene area, containing your distant objects vastly reduced in scale. You would then use the LateUpdate() of your main camera to keep rotation in check, while movement is scaled by your 'mini galaxy scale factor'. Thus you'd still edge towards these planets, but very slowly. You would also configure the mini-galaxy camera to have the skybox, and to draw first by setting its depth lower than the main camera's. I'm also not sure how sheer size will factor into this, unless we can narrow the camera frustum to make items look massive?
  3. Additionally, instead of moving your player around a universe, where physics might get dodgy as you move away from the center, move your universe around the player, keeping them at 0,0,0 (sounds hard).
  4. I had another thought, which was to do step 3 'occasionally': e.g. when my ship hits +/-1000 in any axis, move all root game objects, and the player, back by the same amount. This effectively zones the map so that we never stray too far and physics can work effectively. However, I imagine that any code attempting to reach a specific world coordinate will have some amusing 'hyperspeed' experiences. Best not to prematurely optimize.

So, my next step is to try some of these methods out and report back. I could take the easy route out and just deny allowing players to get within close proximity to planets and insist on using space-stations to Segway between space and ground. If it was good enough for Starwars Galaxies, it’s good enough for me. But ideally I’d love for players to go seamlessly from space to ground, if their ship was suitable. #lifegoal