Positioning Sprites with Rectangles and Vectors in XNA


Staying with the XNA techie theme, here is a word about positioning sprites in a game. A sprite is something that has a texture (the image to be displayed when the sprite is drawn) and a position (the place to draw it). Generally sprites map onto objects in a game. If I am creating a game and I need to draw a bat, ball, spaceship, alien or asteroid I will use a sprite to do this.

There are two different ways to position sprites in XNA, and each has its advantages and disadvantages, so it is worth knowing about both.

Rectangle Positioning

With this form you create a rectangle which tells XNA where to draw the item, and how big it is:

Rectangle r = new Rectangle(0, 0, 200, 100);
spriteBatch.Draw(texture, r, Color.White);

This draws our texture in the top left hand corner (0,0), in a rectangle 200 pixels wide and 100 pixels high.

Using a rectangle like this works well, and it also gives you a ready made “bounding box” which you can use to test for collisions:

if (rect1.Intersects(rect2)) {
    // We have a collision between rect1 and rect2
}

However, there are some bad things about using rectangles that make me less keen on them:

  • Rectangles are positioned using integers. The X and Y properties that describe where a rectangle is do not have a fractional part. This means that if you want to move a rectangle slowly around the screen (i.e. less than a pixel per update) you can’t just use the X and Y properties to do this.
  • You can use rectangles to scale drawing, but this gets a bit tedious as you have to work out the size of rectangle you need, and you also need to be mindful of the aspect ratio of the item you are drawing so that it doesn’t end up squashed or stretched (there is a small sketch of this after the list).
  • It is impossible to rotate a sprite which is positioned using a rectangle.
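
To show what I mean by tedious, here is a minimal sketch (not from the original example) of drawing a texture at half size while keeping its aspect ratio, by working the rectangle size out from the texture dimensions:

float scale = 0.5f;
Rectangle r = new Rectangle(
    0, 0,                               // top left corner of the drawn sprite
    (int)(texture.Width * scale),       // width worked out from the texture
    (int)(texture.Height * scale));     // height worked out the same way, so nothing gets squashed
spriteBatch.Draw(texture, r, Color.White);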

So, rectangles are good for very simple sprites, but once you have become more familiar with XNA I think it is worth moving on to Vector positioning.

Vector Positioning

A vector is something that has direction and magnitude. That sounds posh. Actually it just contains an X and Y value, just like a coordinate. The “direction and magnitude” bit kicks in if you draw a line from the origin (0,0) to the X, Y position given. This line is the vector. The direction is the way the line points, and the magnitude is how long the line is.  In XNA terms the Vector2 type is the one you can use to position 2D sprites:

Vector2 v = new Vector2(10, 10);
spriteBatch.Draw(texture, v, Color.White);

This code draws the texture at position (10,10). The texture is drawn at whatever size it happens to be, i.e. if the texture image is 200 pixels by 100 it will be drawn at that size. This means that you might need to scale your textures to fit a particular screen size, but as we shall see later this is not a huge problem.

There are quite a few good things about vectors though.

  • The X and Y values of a vector are floating point, so you have very good control of sprite speed
  • The XNA framework supports vector mathematics directly. So you can write code like this:

    position = position + speed;

    - where position and speed are both vectors
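
To give a feel for how this works in practice, here is a minimal sketch (the field names and values are just for illustration) of moving a sprite less than a pixel per update using vector arithmetic:

Vector2 position = new Vector2(10, 10);
Vector2 speed = new Vector2(0.25f, 0.1f);   // less than a pixel per update

// in the game's Update method:
position = position + speed;

// in the game's Draw method:
spriteBatch.Draw(texture, position, Color.White);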

When it comes to scaling and rotating a sprite positioned using a vector you can use a more complex version of the Draw command (find a detailed description here) to do all this. Also, bear in mind that if you are targeting Windows Phone you can fix the resolution of your game to a particular value and then make all your assets fit:

graphics.PreferredBackBufferWidth = 480;
graphics.PreferredBackBufferHeight = 800;

The phone hardware will scale the display automatically to match whatever size you specify, now and in the future.
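
For reference, here is a minimal sketch (the values are just for illustration) of the fuller Draw overload mentioned above, which takes a rotation in radians, an origin to rotate around and a scale factor:

Vector2 position = new Vector2(100, 100);
float rotation = MathHelper.PiOver4;        // 45 degrees, expressed in radians
Vector2 origin = new Vector2(texture.Width / 2, texture.Height / 2);   // rotate about the centre
float scale = 0.5f;                         // draw at half size

spriteBatch.Draw(texture, position, null, Color.White,
                 rotation, origin, scale, SpriteEffects.None, 0);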

If you want to test whether the area covered by a sprite contains a particular position you can write code like this:

public bool Contains(Vector2 pos)
{
    if (pos.X < position.X) return false;
    if (pos.X > (position.X + texture.Width)) return false;
    if (pos.Y < position.Y) return false;
    if (pos.Y > (position.Y + texture.Height)) return false;
    return true;
}

This returns true if the area covered by the sprite contains the given position.

For bounding box collisions you can use this:

public bool Intersects(Sprite c)
{
    if (position.X + texture.Width < c.position.X) return false;
    if (c.position.X + c.texture.Width < position.X) return false;
    if (position.Y + texture.Height < c.position.Y) return false;
    if (c.position.Y + c.texture.Height < position.Y) return false;
    return true;
}

This test will tell you if the bounding boxes around the two sprites intersect. However, if the textures don’t fill the entire sprite rectangle this test is not a very accurate one. I’ll be covering pixel level collision detection next.

Game Object State Management in XNA


Hmm. This sounds a bit deep for a Saturday blog post. I suppose it is, but I got asked a question at Mix 11 and I’ve been pondering it ever since then. The question was about some code like this:

class Cloud : CloudGame.ISprite
{
    public Texture2D CloudTexture;
    public Vector2 CloudPosition;
    public Vector2 CloudSpeed;
    public bool Burst = false;

    public void Draw(CloudGame game)
    {
        if (!Burst)
            game.spriteBatch.Draw(CloudTexture,
                                  CloudPosition,
                                  Color.White);
    }
    // rest of cloud code here
}

This is a tiny part of my “Cloud Bursting” game demo, where players touch clouds to burst them. The above code is the definition of the cloud itself. This has a texture, position, speed of movement and a Burst flag as properties.

The Burst flag is how the cloud “knows” whether it has been burst. If the cloud has been burst it is not drawn. At the start of the game all the clouds have their Burst flag set to false to indicate that they are still part of the game. As the player touches them the Burst flag is set to true.

You see this in a lot of game play situations. Anywhere that things are “killed” they have to be able to remember whether they are still alive or not. The code above is simple and works well: at any instant a cloud is either burst or not. If it is burst (i.e. the Burst flag is true) it plays no further part in the game and is not drawn.

However, this design gave rise to the question: “Why do we need to have a flag? Surely it would be more efficient to remove a ‘burst’ cloud from the list of things to be drawn?”

This is a very good question. To understand it you need to remember that in a game we will have many objects on the screen at any given time. To manage this the game contains a collection of some kind (usually a List). During Update and Draw the game works through the list acting on each item in it. If an item is removed from the list it will no longer be part of the game.  When the player bursts a cloud, rather than setting a flag you could just remove that cloud from the list of active game objects.

This approach has the advantage that it is more efficient. Instead of a cloud having to decide not to draw itself the game never tries to draw burst clouds at all.  However, it also adds complication. The game must now have a “master” list of clouds and an “active” list of clouds. At the start of a game the master cloud references are copied into the active list. This is to avoid the overheads of creating and destroying objects, something you really don’t want to be doing on a regular basis.
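
For the sake of argument, here is a minimal sketch of how the list-based approach might look (this is not how the Cloud Bursting demo actually works, and the names are just for illustration; it needs using System.Collections.Generic; at the top):

List<Cloud> masterClouds = new List<Cloud>();   // built once, holds every cloud
List<Cloud> activeClouds = new List<Cloud>();   // the clouds still in play

void StartGame()
{
    activeClouds.Clear();
    activeClouds.AddRange(masterClouds);        // reuse the existing cloud objects
}

void BurstCloud(Cloud cloud)
{
    activeClouds.Remove(cloud);                 // this cloud is no longer updated or drawn
}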

Furthermore, the time that is saved is probably not going to be that much use to us. If the game works fine with all the clouds visible (which it should do) then saving small amounts of processor time when some clouds are removed is not going to make that much difference to performance. In this situation it is the drawing of the objects that takes the time, not deciding whether or not to draw them.

The fundamental principle here is that you should go for simplicity first, then optimise when you discover you have a performance issue. I like having the flags present: it makes for greater cohesion and easier debugging. I can look at any cloud and decide whether or not it should be drawn simply by inspecting the flag. If the flag wasn’t there I’d have to check which lists held the cloud, and so on.

So, I go with internal flags, not lists, which is pretty much what I said at the time as I remember.

C# Yield Return Fun

public static IEnumerable YieldFun()
{
    yield return 1;
    yield return 2;
    yield return 3;
}

static void Main(string[] args)
{
    foreach (int i in YieldFun())
    {
        Console.WriteLine(i);
    }
}

If you can tell me what the code above does, you understand how yield return works. If you can’t, read on……

In a C# program you can mark things as implementing the IEnumerable interface (to use this you need to have a using System.Collections; at the top of your program). This means the thing can be enumerated, i.e. I can get a succession of objects from it.

The best way to work through something that can be enumerated is by using the foreach construction, as shown above. You’ve probably seen foreach when you’ve worked through items in a collection such as a List or an array.  In the above code we are using foreach to work through the enumerable results that the YieldFun method returns. 

The code inside YieldFun looks a bit scary. The yield return keyword combination is followed by the thing that you want to send back for this iteration. In this case I’m just returning a different number each time. What happens when the yield return is reached is that the method stops at that point, returns the result and then, when the foreach asks for the next value, the method wakes up and continues at the next statement. The result of the program is simply the sequence:

1
2
3

If you want to programmatically supply a sequence of items to a consumer then this is  a great way to do it. 
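
As a small example of that, here is a minimal sketch (not part of the original program) of a method that supplies the squares of the numbers up to a limit, doing the work only as each value is asked for:

public static IEnumerable Squares(int limit)
{
    for (int i = 1; i <= limit; i++)
    {
        yield return i * i;   // pauses here until the foreach asks for the next value
    }
}

// foreach (int square in Squares(5)) Console.WriteLine(square);   // prints 1 4 9 16 25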

Talking the Talk and Walking the Walk


We had a couple of presentations in the department today. Team Yellow and Team Purple (Tentacle?) gave the initial presentations for their group projects. Given that the teams had only been working together for a week or so and this was their first stand-up together, they did very well.

One thing that did stand out, though, was some of the phrases that were used, and this brought home to me how careful you need to be about the way you talk in front of an audience, particularly if you want to convince them you know what you are doing.

For example take the phrase “User Friendly”. It is all very well to say “We are going to produce a user-friendly solution”. You want to convey that you think this aspect of a system is important. However, saying it like this is pretty much meaningless. The customer is not expecting you to produce something that is “user-hostile”, but the phrase could also be expressed as “We’re not going to make something that acts as if it hates you”. 

It is far better to say what you are actually going to do to solve the problem. “We are going to closely involve the end user in the design and implementation so that they find the system easy to use.” is a much better way to express your intentions.  Take a similar approach when you talk about security. Rather than saying you think something is important you must say what you are going to do about it.

The other thing that came out from the presentations was partly my fault. I’d said earlier that it is very important to make the customer aware of those aspects of the system that you are not going to implement. For example, you might be expecting the customer to back up the data rather than providing data backup as part of your solution. You need to get this across, but I’m not sure you should have a slide with the heading “Things we are not going to do”. It is far better to say things like “The server infrastructure that you are using will be used to back up our data along with that from other systems”. This puts the responsibility in the right place without sounding like you are avoiding work.

If all this sounds a bit like the dread “marketing speak” then I’m very sorry about that, but I do feel it is important that you make sure the things you say are backed up with some kind of action plan, and that you avoid sounding negative about your intentions.

Getting Students Started in the Windows Phone Marketplace


Getting started as a student in the Windows Phone Marketplace is actually quite easy, but there are one or two issues that you need to be aware of, and best practices to follow, to make sure that you get going as quickly as possible. If you already know how to do this stuff, the two issues you need to be aware of are very simple:

  • The validation of your account only starts once you have submitted an application for approval.
  • You can only unlock a Windows Phone device once you have submitted an application for approval.

The bottom line here is that the first thing you must do when you have registered is submit an application for approval. Think of this as a “placeholder” that will move you through the process. You can remove it from sale later.

If you are an experienced Windows Phone developer this should be no problem. If you have not submitted an application before, the process is simple enough, and to make it even easier I’ve made a tiny screencast that goes through it for you. In this I make a brand new application from scratch and then show how it would be submitted for approval. If you just copy what I do you can be sorted in around half an hour or so.

You can download and view the video here:

Windows Phone Marketplace Walkthrough

Note: The application that I submit during the screencast hasn’t appeared in the Marketplace yet. I’ll let you know when it does…

Reference and Value Types


I reckon that the day I give a lecture and don’t learn anything is the day that I will give up teaching. I always take something away from a lecture, although sometimes it is only a decision not to use that particular joke again…

Today I was telling the first year about reference and value types in C# and I learnt something as well. For those of you who are not familiar with programming in C# (and why should you be?)  this is all about how data is held in a program.

Once you get beyond programs that do simple sums you find yourself with a need to lump data together. This happens as soon as you have to do some work in the Real World™. For example, you might be creating an account management system for a customer and so you will need to have a way of holding information about a particular customer account. This will include the name of the customer, their address and their account balance, amongst other things.

Fortunately C# lets you design objects that can contain these items, for example a string for the name, a number for the balance, a string for the address and so on.  In fact, C# provides two ways that you can lump data together. One of these is called a struct (short for structure) and the other is called a class (short for class). These two can be hard to tell apart, in that the way that they are created is exactly the same. But they have one very important difference. Structures are managed by value, but classes are managed by reference.

Today is the point in the course where I have to explain the difference between the two.  I’ve got a routine for doing this which I’ve used in the past, and it usually gets there. If an item is managed by value (for example a struct) you can think of it as a box with a name painted on it.  If we move data between two variables managed by value:

destination= source;

- the result is that whatever value is in the source box is copied into the destination box. If my source is a structure value which contains lots of elements all of these are copied into the destination. This is how simple variables such as integers and floating point values are managed.

However, if an item is managed by reference the effect of the assignment above is different. You can think of a reference as a named tag which is tied to an object in memory. If I assign one reference to another:

destination = source;

- the result of this is that both reference tags are now tied to the same object in memory.  No data is actually copied anywhere.
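
Here is a minimal sketch of the difference in code (the type names are just for illustration):

struct CounterValue { public int Count; }   // managed by value
class CounterRef    { public int Count; }   // managed by reference

CounterValue v1 = new CounterValue();
v1.Count = 1;
CounterValue v2 = v1;       // the whole value is copied into v2
v2.Count = 99;              // v1.Count is still 1

CounterRef r1 = new CounterRef();
r1.Count = 1;
CounterRef r2 = r1;         // both tags are now tied to the same object
r2.Count = 99;              // r1.Count is now 99 as well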

At this point in the explanation I usually have a room full of people wondering why we bother with references. They just seem to be an added piece of confusion. Now that we have references we have the potential for problems when the equals behaviour doesn’t do what we expect. Why do we have these two ways of working with data? Why can’t we just use values for everything?

My answer to this is that using references allows us to provide different views of data. If I have a list of customers that I want to order by both customer name and account number then this is not possible with a single array of values. But if I use references it is much easier. I can have a list of references which is ordered by name and another list ordered by account number.
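
A minimal sketch of that idea, assuming a Customer class with Name and AccountNumber properties and an existing collection called customers:

List<Customer> byName = new List<Customer>(customers);
List<Customer> byAccount = new List<Customer>(customers);

byName.Sort((a, b) => a.Name.CompareTo(b.Name));
byAccount.Sort((a, b) => a.AccountNumber.CompareTo(b.AccountNumber));

// Both lists hold references to the same Customer objects, just in different orders,
// so a change made through one list is visible through the other.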

So far I’m going by the slides. But then it occurred to me to go a bit further, and think about the use of reference and value types from a design point of view. If I’m designing a data structure for a sprite in a game (for example a single alien in a Space Invaders game) the sprite will have to contain the image to be used to draw the sprite and the position of the sprite on the screen. I posed the question of which of these two elements should be managed by value and which by reference.

After some discussion we came to the conclusion that it is best if the image used to draw the sprite is managed by reference. That means that a number of sprites can hold references to the same sprite design. You see this a lot in computer games: where a game has multiple identical elements (soldiers, cars, spaceships etc.) it is often the case that they are all sharing a single graphic. However, the position of the sprite on the screen should be unique to each sprite; we are never going to want to share it, and so the position should be a value type.

We then went through a bunch of other situations where an object contains things, and pondered for each thing whether it should be managed by value or by reference. Generally speaking we came to the conclusion that anything you want to share should be managed by reference, but stuff that is unique to you should be a value.

Of course references bring a lot of other benefits too, which we will explore in the next few weeks, but the thing I learnt was that the more you can show a context in which a particular language characteristic is applicable the more chance you have of getting the message across.

As a little puzzle, one thing we talked about was the storage of the address of a customer in our account database. Should that be managed by value or reference, and why?

Save Dalby Forest

Dalby Forest with Horse

Dalby forest is one of my favourite places in the country. We go there a couple of times a year with a packed lunch and just wander round the place. Years ago, when the kids were smaller, we used to go and have barbeques. It’s just a nice place with loads of trees and some lovely walks.

Dalby Forest Bridestones

And if the government have their way I won’t be able to go there much longer. They have this cunning plan to sell off, or lease, or give away, or whatever, the forests in the UK. This will save them some money and avoid them having to levy so much tax on very rich people. Or something.  It will also almost certainly mean that places like Dalby Forest will be out of bounds to folks like you and me.

Dalby Forest Path

I’m not a particularly political person. My theory is that whoever you vote for the government always gets in. I’m also very aware that there are much more important things out there than whether or not Rob has a nice place to go and have his picnics.  But I’m also aware that there are a lot of us packed onto this tiny little island,  and that the few really nice green spaces that we have left should be protected, not sold off for profit.

There is a petition you can sign up to if you want your voice to be heard on this matter. I’ve already done so. You can find it at the Woodland Trust web site:

http://www.woodlandtrust.org.uk

What Computer should I get for University?


We got an email last week asking what kind of computer works best at university. Here are my thoughts on the matter:

Netbook

Netbooks based on the Atom processor are very cheap and great for web surfing, email and writing essays but they are a bit underpowered for the more demanding stuff like image editing and HD video. While you can use large tools like Visual Studio on an Atom powered Netbook it will not be a particularly enjoyable experience, particularly if you only have 1G of RAM in the machine.  However, they are great for taking notes, very portable and their batteries should see out a day on campus if you are careful. And they are so cheap you won’t suffer an enormous loss if you drop or lose yours.

Laptop

If you are buying a laptop I would go for at least a Core 2 of some kind. Machines based on the i3 processor are becoming affordable and are worth a look. If you are buying a laptop make sure that it has (or you can upgrade it to) at least 4G of RAM. If you want to write games with the machine it really needs a separate graphics adapter, those with built in graphics might work, but their performance will not be good. Take a look here for details of requirements to write XNA games:

http://msdn.microsoft.com/en-us/library/bb203925.aspx

Such a machine need not cost too much; I got a Dell Studio 15 with ATI graphics for around the 600 pound mark last year, and I’m sure things have moved on since then. Of course the snag with buying a “proper” portable computer is that it is properly heavy and scarily expensive to cart round with you. This might mean that it gets left back at your house most of the time, which kind of negates the purpose of a laptop.

You should also look very carefully at the battery life. Bear in mind that although there are some charging stations on campus these are the exception rather than the rule and so a machine that can last all day is a good plan. I used to have a rule of thumb that I would take the manufacturers’ claimed life and halve it, so a machine that was supposed to be good for 3 hours would actually give only 90 minutes. However, I think things are improving a bit. My latest little machine claims 9 hours of use, and pretty much gets there.

Desktop

I’m in the process of returning to my desktop roots at the moment. I moved onto a laptop a while back because I loved the idea of having all my data with me at all times. It meant that I could pretty much work anywhere. However, I can now have my data anywhere by using Live Mesh and Dropbox, and I fancy having a go with two monitors, so moving back to a desktop makes sense. If you are buying a desktop now you should take a look at the new Intel “Sandy Bridge” i5 processor, which is not that expensive and provides a big leap in performance. Such a machine with at least 4G of RAM, a 1T hard disk and a reasonable graphics card should come in at around that magic 600 pounds (if you shop carefully) and will give a big step up in performance over a laptop of similar price.

Some students have a great big hulking desktop at home and carry a tiny cheap netbook around during the day to take notes. This can work very well, particularly if you use one of the cloud services (see backup below) to keep everything synchronised.

Apple

Apple seem to have figured out what makes something a pleasure to own and use, and then bottled it and sold it. All their machines run Windows really well, although the native OS X operating system has a lot to commend it and gives you access to wonderful programs like Garageband which come free with each Mac. And of course if you have a Mac you can write programs for the iPhone. 

I would place a slight question mark over the reliability and longevity of their hardware though. My MacBook Pro has been through two batteries, a power supply and a main board since I got it, and my little MacBook is on its second battery. I've bought machines from lots of other suppliers, Dell, Sony, Toshiba and Acer, and never had this failure rate with them.

If you are in academia make sure that you buy using the Apple academic discount scheme, you will save a little money but you will also get three years of Applecare warranty, which is well worth having. 

Software

Don’t forget software when you are pricing your systems. All our students get Microsoft Academic Alliance usernames shortly after they arrive with us and you can get Microsoft Operating systems and development tools for free from this:

http://msdn.microsoft.com/en-gb/academic/default

The only thing that you will miss from this is Microsoft Office, which you can get quite cheaply from here:

http://www.microsoft.com/uk/education/studentoffer/

If you want to try Linux I’d recommend taking a look at Ubuntu, which provides one of the best turnkey Unix experiences.

Backup

It seems that you have to lose a big chunk of work before you appreciate the importance of making backups of your data. One of my project students had their hard disk crash the night after they had just finished writing a very important report. Of course they hadn’t backed up the files…. 

These days, rather than remembering to take backups, I use Dropbox and Live Mesh to make sure that the files on my computers are all synchronised. During a working day I’ll probably move between two or three different platforms and these technologies make sure that the data on all of them always lines up. They also provide browser based interfaces, so that you can get at all your important files anywhere you can find a web connection.

http://explore.live.com/windows-live-mesh

http://dropbox.com/

The main problem with these services is the limited amount of space they offer. Live Mesh will give you 5G of online storage for free, with Dropbox you have to make do with 2G for free, although you can have more if you pay. However, this is not an issue for me. I don’t put any of my music or video on them, I simply use them to store “work in progress”, which for all the taught content and presentations that I gave last year only amounts to around 2 or so gig.

Insurance

If you do buy lots of fancy hardware do make sure that it is insured. Sometimes home insurance needs to be modified to cover expensive single items and if you move away from home you may need to get a policy of your own to cover your gadgets.

Final Words

Don’t spend too much on a computer. You don’t need a huge powerful machine to do our courses at Hull, actually most of the work (apart from 3D game writing) could be performed on a fairly basic system costing less than 300 pounds. We do have machines on campus which you can use, including some really powerful ones in the games lab which are available to students who need a lot of horsepower. Remember that anyone who tells you that you need the most expensive and powerful system they have is probably a computer salesman….

Hull Digital Question Time


The view from the audience, from left to right Jon Moss in the chair, Imran Ali, Helen Philpot and Prof. Calie Pistorius, VC of Hull University.

I’ve just been to something really, really good. With free drinks at the end. Hull Digital Question Time was set up by Jon Moss and brought together a panel of experts to discuss the future of digital technology. I wasn’t sure what to expect, but the combination of interesting questions, a range of expertise from the panel and sensible debate from the audience made for a fascinating evening.  And then we all went to the bar..

I think the event has been filmed and it would make an absolutely great podcast, so with a bit of luck it will turn up in a downloadable form at some point in the future. In the meantime, if you get the chance to go to any events like this in the future (and I’ve already asked for another one) then you should jump at it.

One more thing, Jon told us that the date for the next Hull Digital Live event has been set. It is the 4th of November this year. Note it in your diary.

Processing Lots of Files in C#


Elliot came to see me today with a need to process a whole bunch of files on a disk. I quite enjoy playing with code and so we spent a few minutes building a framework which would work through a directory tree and allow him to work on each file in turn. Then I thought it was worth blogging, and here we are.

Finding all the files in a directory

The first thing you want to do is find all the files in a directory. Suppose we put the path to the directory into a string:

string startPath = @"c:\users\Rob\Documents";

Note that I’ve used the special version of string literal with the @ in front. This is so my string can contain backslash characters without them being interpreted as the start of escape sequences. I want to actually use the backslash (\) rather than have it turn into something unwanted such as a newline (\n).
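
In other words, these two strings hold exactly the same characters; the @ form just saves having to double up every backslash:

string withEscapes = "c:\\users\\Rob\\Documents";   // each \\ is an escaped backslash
string verbatim = @"c:\users\Rob\Documents";        // taken literally, character by character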

I can find all the files in that directory by using the Directory.GetFiles method, which is in the System.IO namespace. It returns an array of strings with all the filenames in it.

string [] filenames = Directory.GetFiles(startPath);
for (int i = 0; i < filenames.Length; i++)
{
   Console.WriteLine("File : " + filenames[i]);
}

This lump of C# will print out the names of all the files in the startPath directory. So now Elliot can work on each file in turn.

Finding all the Directories in a Directory

Unfortunately my lovely solution doesn’t actually do all that we want. It will pull out all the files in a directory, but we also want to work on the content of the directories in that directory too. It turns out that getting all the directories in a directory is actually very easy too. You use the Directory.GetDirectories method:

string [] directories =
          Directory.GetDirectories(startPath);
for (int i = 0; i < directories.Length; i++)
{
    Console.WriteLine("Directory : " + directories[i]);
}

This lump of C# will print out all the directories in the path that was supplied.

Processing a Whole Directory Tree

I can make a method which will process all the files in a directory tree. This could be version 1.0

static void ProcessFiles(string startPath)
{
   Console.WriteLine("Processing: " + startPath); 
   string [] filenames = Directory.GetFiles(startPath); 
   for (int i = 0; i < filenames.Length; i++)
   {
      // This is where we process the files themselves
      Console.WriteLine("Processing: " + filenames[i]); 
   }
}

I can use it by calling it with a path to work on:

ProcessFiles(@"c:\users\Rob\Documents");

This would work through all the files in my Documents directory. Now I need to improve the method to make it work through an entire directory tree. It turns out that this is really easy too. We can use recursion.

Recursive solutions appear when we define a solution in terms of itself. In this situation we say things like: “To process a directory we must process all the directories in it”.  From a programming perspective recursion is where a method calls itself.  We want to make ProcessFiles call itself for every directory in the start path.

static void ProcessFiles(string startPath)
{
  Console.WriteLine("Processing: " + startPath); 

  string [] directories = 
                  Directory.GetDirectories(startPath); 
  for (int i = 0; i < directories.Length; i++)
  {
    ProcessFiles(directories[i]);
  }

  string [] filenames = Directory.GetFiles(startPath); 
  for (int i = 0; i < filenames.Length; i++)
  { 
    Console.WriteLine("Processing : " + filenames[i]); 
  }
}

The clever, recursive bit is the first loop. This uses the code we have already seen, gets a list of all the directory paths and then calls ProcessFiles (i.e. itself) to work on each of those. If you compile this method (remember to add using System.IO; to the top so that you can get hold of all these useful methods) you will find that it will print out all the files in all the directories.
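
As an aside, if all you need is a flat list of every file path, Directory.GetFiles has an overload that will walk the whole tree for you (a small sketch; the recursive version above is still the way to go if you want to do directory-level work along the way):

string[] allFiles = Directory.GetFiles(startPath, "*", SearchOption.AllDirectories);
foreach (string filename in allFiles)
{
    Console.WriteLine("Processing : " + filename);
}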

Console Window Tip:  If you want to pause the listing as it whizzes past in the command window you can hold down CTRL and press S to stop the display, and CTRL+Q to resume it.

Links for Software Engineers


I was talking to our .NET Development Postgrad students and we decided that there were a few things that you should be familiar with if you want to become a “proper” Software Engineer. These are the things I think you should do:

Read “Code Complete 2” by Steve McConnell. Perhaps the best book ever on software construction. Then keep your copy where it is handy, and have a policy of reading a bit now and then, just to keep up to speed. If you can track down a copy of “Rapid Development” you should read that too.

Read I.M. Wright’s “Hard Code” blog. And buy the book if you like.

Read “How to be a Programmer”. Excellent stuff.

This is not everything you should do. There are other good places to look. But it is a start. Oh, and if anyone out there has other ideas about good, pragmatic texts for budding coders, then let me know and I’ll add them.

Hull Digital

I’m really pleased to find out that there is now a Digital Community in Hull:

http://hulldigital.co.uk/

They are organising a live event in October which has some interesting speakers:

http://www.hdlive09.co.uk/

I’ve persuaded my boss to pay for a ticket, and I’m really looking forward to it. I’m pleased to find that they do student pricing for the event (which seems to me quite reasonable) and with a bit of luck we can involve some of our students in their events in the future.

One of the most important things about computing is that the field is constantly changing and professional development is something you really need to work at if you want to keep your skills up to date.  Hull Digital looks like it will be a neat way of doing this.

I M Wright Speaks

You’ve probably heard me go on about I M Wright before. He is the “Microsoft Development Manager at Large” alter ego of Eric Brechner. He wrote the book Hard Code, which is a wonderful look at how to create software properly. He also has a blog which is brilliant. And now he has a podcast too, so you can listen to the good word rather than have to read it. You can find the file here.

Bad/Mad Practice

Alfred Thompson had a good post in his blog about software testing. Alfred and I are around the same generation (I hope he won’t mind me saying this) and we’ve both written software for money in the past. When I was writing my largest projects I didn’t make use of any kind of tester particularly; I just made sure that it worked before I handed it over. Alfred was the same.

Nowadays it seems that there is a trend towards developers handing stuff over which they haven’t really tested, on the basis that the test people who receive it will find any mistakes they made. Alfred (and I) hate this idea. I put quite a verbose response to this effect on his post, which you can find here:

http://blogs.msdn.com/alfredth/archive/2009/01/27/how-not-to-develop-software.aspx

I’ve since talked to people in the business and was appalled to hear that this practice is not uncommon nowadays because developers are pushed to meet deadlines and the only way they can do this is by skimping on the testing they do. Ugh. I reckon this really goes back to Bad Management, in that a manager will get a good feeling if they are enforcing a strict regime with tight deadlines which the programmers are all hitting.

The end result though is that the testers keep sending stuff back for re-working because it has bugs in, the developers lose time on the next phase because they have to fix all these bugs, so they send the next version out (in time for the deadline) with more bugs and so on. The words Vicious and Circle spring to mind. Along with Bad and Product.

It turns out that one of my heroes, Eric Brechner, has written a lovely post about this that sets it out really nicely:

http://blogs.msdn.com/eric_brechner/archive/2009/01/01/sustained-engineering-idiocy.aspx

Developing the Future

I have just received a nicely printed document called “Developing the Future” from Allison at Microsoft UK Academic Alliance. This is a summary of a report produced  by the British Computer Society, software firm Intellect and Microsoft. The report is produced every year and takes a look at the way the UK Software Industry is going. If you are interested in the business I strongly suggest that you take a look at the summary. It makes lots of good points about the future. The full report is even more interesting (but is also 128 pages). Points that I took away were:

  • The UK is still a great place to start a software business, with access to venture capital, a good tax regime and a public who provide a ready market for new developments. (although you might get bought out by a large multinational company if you do well – which might not be too bad I suppose). Other countries are starting to compete though, with targeted incentives for particular fields – notably Game Development in France and Canada.
  • Whilst Small, Medium and Large software development companies are doing well, there has been a decline in “Micro” companies, with less than 10 employees (although the small company sector has got bigger – so perhaps the Micro companies are growing).
  • There is still a “Knowledge Gap” in the Software Industry. Although there are many Computer Science courses in the UK, some are having difficulty recruiting students and there is a feeling amongst employers that not all software graduates have an appropriate skill set. Which leads to a good jobs market for those that have.

I read this with my “Hull University, Department of Computer Science” hat on of course, and I like to think that the graduates we produce are useful and have good employment prospects. Past experience seems to bear this out, and (not wishing to blow our own trumpet or anything) the fact that we are presently ranked sixth in the country for graduate employment bodes well.

You can get the summary, and the full report from here:

http://www.microsoft.com/uk/developingthefuture/default.mspx

The Future of Computing

If there is one person who should have an idea of where computers are going it is Andrew Herbert. As Managing Director of Microsoft Research he gets to spend a lot of time thinking about the future of this business. I was very pleased that I woke up from my impromptu nap just in time to go off and hear his talk to Imagine Cup students where he gave a brief exposition of the way he thinks things are headed.

Very interesting. He made the very good point that even though computer use has changed massively since he started in the field, with personal computers now commonplace, and everyone carrying around huge amounts of processing power in their phones, cameras and laptop pcs, the processors inside these devices work in fundamentally the same way as the first ever computer. The rise of increasingly clever and friendly systems has been on the back of the continuous improvement in processing power that has made more advanced software possible.

The bad news is that the way we build solutions in the future is going to have to change, for two reasons.

Firstly we are running out of scope to improve the speed of computers. The processors themselves, and the memory they use, cannot be made to work faster in the future. Instead we are going to have to build systems which get performance by providing extra throughput from multiple processors, rather than a single chip that goes more quickly.

Secondly it is becoming increasingly hard to create and deploy software with the level of complexity that is expected today. Many large developments end up being abandoned just because we cannot produce something which solves the problem, or can be made reliable enough to be useful.

All this points to massive change in the way that computers will be programmed in the future, with a need to mathematically prove that crucial software always works, and new programming languages being created to allow code to make better use of the new arrangements of hardware that will become commonplace.

Programmers of the future will have to use different ways to express their solutions, and develop new techniques of building, documenting and proving the correctness of what they write. The model of computer use itself is also changing, with distributed systems being used to access large centralised services via the network, leading to even more change.

The great news is that nobody in the room seemed particularly scared by these prospects. They didn't seem to regard them as things to worry about, but as a whole new set of challenges and opportunities to make their mark and do great things, which is just as it should be.

If you are looking for a field where what you do can have the greatest impact on the largest number of people and how they live their daily lives, I think you will be hard pushed to find one more interesting than computing just now.

The Value of a Degree

Earlier this week there was a big feature on the UK MSN homepage about the value of a university degree. The central thrust of the feature was that a degree does not prepare you for the real world and leaves you only with an enormous debt and a huge hangover.

The article contained a link to a discussion where folks told tales of woe and how their hard earned qualification has not landed them the job of their dreams.

The way I see it (if you really want to know) is that if you decide you want something (such as the "job of your dreams") then you should plan a campaign which will get you it. A degree can be a useful part of such a campaign. But it is not the only one. You should find every other possible way to get there. Try to land some work experience in the area. Do things that broaden you out and make you more "interesting" to people working in the field.

If you want to be a games programmer, by all means do a degree in it, but also start writing little games and putting them on your games programming blog. Start contributing to forums about the field, asking and answering sensible questions. Get a job in the business, even if it is just working in a games shop. It all helps make you into a more enticing prospect.

Getting a degree and then expecting to be snapped up because of your evident brilliance will not work. In fact I don't think it ever did. When I did mine, all those years ago, when history was current affairs etc etc I remember being told that a degree is not a job ticket, but merely a licence to hunt....

Why not spy on the kids?

I've just done a piece for Radio Humberside which took as its starting point an attempt by an anxious parent to spy on the internet doings of their kids. It did not end well. If you want the thoughts of Rob on the subject, here they are...

I can’t really understand what all the fuss is about with these social networking sites. But then again, I’m almost certainly not supposed to. I write a blog, but that is just because I happen to like writing and my ego is so big that I think other people like to read it. Putting more stuff out there about me seems rather silly, but perhaps that is because I know what I'm like...

At the end of the day the internet is just another communication tool and another way that children (particularly teenagers) can make themselves different from parents. I think every generation does this one way or another. There were huge ructions when postcards were invented because for the first time they provided a quick and cheap way for people to keep in touch (which fathers and chaperones were probably not that keen on). Then it was the telephone, then the mobile phone and now the internet. All the way through the poor parents had to watch their offspring employing new media and devices to communicate. I guess mum and dad just ended up gritting their teeth and trusting that their kids are going to do the right thing, which is probably the best plan.

Using all these wonderful new toys should not be a problem, but just like you’d probably ration someone who wanted to play football all the time and not do any school work, you should do something similar with computer time. And, whilst it is never a good idea to “go under-cover” and spy on your children/young adults (it is not going to encourage trust across the generations), I think that if you suspect that something is going on which is causing your kids unhappiness then it is important to try and find out more.

Whatever you do, don’t move in just to try and get “down with the kids”; this is pretty much doomed from the start. Good advice, such as not giving out personal details, steering clear of strange web sites and never running programs that you’ve just downloaded, is always important though. This should be taught in the same way as we teach road safety. Learning a bit about the computer is also a very good plan. Find out how you can make sure that your system is up to date. Discover how to take backups regularly so that important work doesn’t get lost and you can recover from nasty virus infections. If you can make yourself the family “computer guru” that would be a very nice place to be.

Something which is important is that everyone needs to understand that anything that you put out there is visible to everyone, for all time. Even if you take down those snaps you took at a party for a laugh, they may have been copied already, possibly by one of your "friends". And you don’t want to apply for a job and find that your web personality from ten years ago means that you don’t even get an interview.

Remember that employers are frequently using Google to check up on applicants. I would definitely Google someone who wanted to work for me and I would expect anyone thinking of hiring me to see what they could find out about me in the same way. I never put anything on the web that I would be unhappy about anyone reading. Even my emails are censored like this. You just never know where the data might end up one day.

Of course another thing about the internet is that you can create completely false “alter egos” which let you be anyone you like for a while. I’m not sure why you’d want to do that, but we’ve already established that the point of these things is lost on me anyway. I think that in the future we are going to see a need for people to have a slightly more solid internet persona. For example, if you want to bid on eBay for something you find that many people won’t deal with you unless you have some transaction feedback. That requires a tie back to a concrete identity with proper email and payment technology. Maybe in the future it will be harder to hide behind a fake self that you’ve created, which is probably a good thing in the long run.

I suppose at the end it all boils down to trust: you trust your kids to do the right thing, and they trust you in that they feel happy to tell you when things get tough.