I have actually sat this exam myself and my flatmate in fact won the scholarship, so I feel I am in a unique position to be able to offer advice!

Before I go any further though, I just want to say a few words about the exam's purpose. The scholarship money is a nice bonus and a fairly significant chunk off your student loan, but the exam is really mostly there to decide where you belong in classes with respect to prior knowledge, so don't stress about pouring effort into learning concepts that are totally new to you.

In your first year of the degree you only do two compulsory introductory computer science papers, one in each semester, and your performance in the exam determines whether you're ready to skip the first paper, or both papers, and move straight into some second-year papers instead. Something else worth noting is that skipping ahead won't make your degree shorter or easier (in fact it will probably make it a little more work), but it puts you in a position to get much more out of the degree if you're ready for it; you still have to take the same number of points towards your degree as everyone else, so it just means you'll be able to choose more papers at higher levels. I'm nearly at the end now and I still feel like I have so much more to learn, even though I was able to skip the first introductory paper, so it's a very cool opportunity to have.

You'll be awarded the scholarship if you're ready to skip both introductory papers (so you don't have to actually win the scholarship to skip past the first introductory paper).
Secondly, I want to note that this is by no means an easy exam. It's meant to cover all the concepts taught in first-year computer science, and although they start from the very beginning, they're not things you would ever have been taught in school (unless they've significantly improved the curriculum in the past few years!).
As you can probably see from those questions, some of them are really general and others are a lot more specific. The general questions (e.g. about operating systems, or choosing between different computing platforms) are just meant to stretch you, and there really aren't any textbook answers for them. In a sense, they probably just want to see that you have a passion for the subject. If you know something about Linux or even OS X that makes either of those OSes more appealing for a programmer to use than Windows, for instance, drop that in, even if it's not specifically what the question asked for. If you want to talk about phones and tablets vs desktop and laptop computers, or how mobile operating systems differ from desktop ones, mention some cool random fact from Apple's WWDC or the Google I/O keynote from last night/this morning. I would answer the question in general terms as best you can, and then try to add something that will make you stand out from everyone else taking the test.
The rest of section A will likely be about binary numbers as in the 2013 exam (because that's kind of the only theory you get taught in the first intro to CS paper). To this day I still hate doing binary number conversions and even though I remember studying for them when I sat this test, I screwed them up in the test itself.

Realistically, all you'll ever have to do in practice is convert binary numbers to decimal numbers and vice versa. You'll have the whole concept of binary slowly drilled into you in your courses, but for now all you need to know is the basic binary format. For anything more advanced (like adding/subtracting/multiplying binary numbers in place, i.e. without intermediate decimal conversions) there will be swathes of resources across the internet. I'd maybe start by watching YouTube videos.
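(Side note: once you've had a go at doing these conversions by hand, Python — which I talk more about below — can check your answers with its built-in conversions. Just a quick sketch:)

    print(bin(13))         # prints 0b1101 - the 0b prefix just marks it as binary
    print(int("1101", 2))  # prints 13 - reads the string "1101" as a base-2 number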
The basic unit of binary, and of every (non-quantum) computer system everywhere, is the bit, which has the value 0 or 1. Computers usually process bits in bigger units like bytes, which are chunks of 8 bits. A "word" in computer talk is the natural unit size used by a computer system, which these days is generally 32 or 64 bits. When you're programming, you'll mostly be dealing with primitive types that are 32 or 64 bits long. The thing to take away from this is that while binary numbers are usually represented in blocks of 8 bits, they can be pretty much any length.
So, the most obvious binary numbers are zero and one, because they look the same as in decimal (0 and 1). But as soon as you want to count up to 2, things start getting tricky, because in binary there isn't a single digit for the number 2. Binary is "base 2", which means it has 2 single-digit values, 0 and 1. Decimal is "base 10" because it has 10 single-digit values, 0 to 9. What happens in decimal when you count past the highest single digit (9)? The number becomes 10. In the same way, when we count up from 1 to 2 in binary, the number becomes 10 (but we wouldn't call that ten; it's just one-zero). Now as we count upwards, we keep going: 3 is 11, 4 is 100, 5 is 101, 6 is 110, 7 is 111 and 8 is 1000. In decimal, each digit position is worth ten times the one to its right; in binary, each position is worth twice the one to its right. So you can work out the value of any binary number by knowing what each digit position is worth and adding up the positions that contain a 1:
digit position (counting from the right):   1st   2nd   3rd   4th   5th   6th   7th   8th
value of that position when it holds a 1:     1     2     4     8    16    32    64   128
This is getting hard to explain in writing and it's tricky to wrap your head around at first so I'd recommend watching videos on it for more info.
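If it helps to see the same idea written out as steps, here's a rough little Python sketch of the add-up-the-place-values method (my own illustration, not anything official from the exam):

    # Convert the binary string "1101" to decimal by adding up place values.
    digits = "1101"
    total = 0
    place = 1                    # the rightmost digit is worth 1
    for d in reversed(digits):   # work from right to left
        if d == "1":
            total += place       # this position holds a 1, so add its value
        place *= 2               # each position is worth twice the previous one
    print(total)                 # prints 13 (8 + 4 + 1)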

Something else to take away from this concept is the idea that computers only work with finite numbers. For instance, numbers that are 8 bits long aren't very big! They can only represent 256 possible values, which could be the numbers from 0 to 255 (128 + 64 + 32 + 16 + 8 + 4 + 2 + 1) or, if one bit is reserved for the sign, -128 to 127. We don't usually use data structures this small, but a more common problem is that 32-bit structures can only represent around 4 billion values, so you have to be careful when you're writing code to make sure the numbers you're working with will never exceed the maximum or minimum bounds of the data structures they're stored in. The 32-bit limit is also the reason why there aren't enough IPv4 addresses to accommodate the human population. 64-bit data structures are usually sufficiently large to solve most problems.
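Python's own integers can grow as big as you like, but you can still see these fixed-width limits when you pack a number into a set number of bytes — the struct module below is just one way to demonstrate it:

    import struct

    print(struct.pack("<B", 255))        # an unsigned 8-bit value holds 0..255, so this works
    print(struct.pack("<i", 2**31 - 1))  # a signed 32-bit value tops out at 2,147,483,647

    try:
        struct.pack("<B", 256)           # one too big for 8 bits...
    except struct.error as err:
        print("overflow:", err)          # struct refuses to pack it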
As for programming, I'm not 100% sure what to tell you because the uni are currently still tossing up which language they're going to teach to first years next year.

For the past several years they have taught C# in first year and Java in second year (two very similar languages), but I think the primary contender at the moment is Python, which is really powerful and extremely simple in comparison to the other two. I would wholeheartedly recommend Python, as I think most people do, if it weren't for the fact that in most cases it (rightly so) abstracts away some of the more difficult aspects of programming, such as working with complex data structures and the nitty-gritty of the various types (integers, strings, etc.), which at some point you will still have to learn about. Shane is spot on in his Informer article about the benefits of Java for making sure you keep a solid theoretical understanding of what's going on behind the scenes when you're coding, and that's what I mean when I say that Python abstracts that away (so does PHP). Even so, Python is an ideal language for beginners and advanced programmers alike because it's super powerful and lets you write concise code very quickly, and I think the examiners would be thrilled if you wrote your programs and code fragments in Python. Definitely more so than PHP, anyway, which is what I used when I sat the exam.

You can use any language you like, but if there is one thing I would recommend, it would be to get as familiar as you can with compiling and running programs from the command line. Python and Java (and even PHP, I suppose) are great for this, but C# not so much. When I sat the exam they had a question that involved reading user input, which I didn't know how to do from the command line at the time, so I wrote a PHP web application instead. It worked fine, but it took me a lot longer to write than it would have done if I'd known how to do it from the command line (I still wouldn't know how to do it actually, because no one writes PHP command line applications, hahaha). Java is quite complicated to execute on the command line if you haven't had a fair bit of experience with it, so I would definitely recommend looking into Python. The more I think about it, the more it seems like the natural choice.
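For what it's worth, reading user input from the command line in Python is about as simple as it gets — roughly something like this (just an illustrative sketch):

    # Save as greet.py and run with:  python greet.py
    name = input("What's your name? ")   # waits for the user to type something and press Enter
    print("Hello, " + name + "!")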
Shane's ideas in his Informer link for simple programs you could write to get the hang of programming are excellent and it looks to me like they cover all of the basic things you'll need to know.
Anyway, I'll shut up now cause I need to get back to an assignment haha. I get way too easily carried away with all of this exciting stuff. If you have any more questions please feel free to fire away though!