Up to version 10, the regular Windows version has a maximum of 2^15 – 1 = 32,767 variables and a maximum of 2^31 – 1 ≈ 2.15 billion cases. The student version is limited to 50 variables and 1,500 cases.

Starting with version 10, the limit on the number of variables has been removed; the only hard-coded limit is 2^31 – 1 ≈ 2.15 billion variables. This does not mean it makes sense to work with millions of variables: it is always more efficient to load only the variables you currently need.

Here are additional points made by Jon Peck in his 06/05/03 posting to the SPSSX-L list:

>In calculating these limits, count one for each 8 bytes or part
>thereof of a string variable.  An A10 variable counts as two
>variables, for example.
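The counting rule above (one variable per 8 bytes, or part thereof, of a string) can be sketched in Python. The function name is mine, not part of SPSS:

```python
import math

def spss_variable_count(string_width_bytes):
    """Count one variable for each 8 bytes, or part thereof,
    of a string variable (the rule quoted above)."""
    return math.ceil(string_width_bytes / 8)

# An A10 string variable (10 bytes wide) counts as two variables:
print(spss_variable_count(10))  # 2
# A string of exactly 8 bytes counts as one:
print(spss_variable_count(8))   # 1
```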

>Approaching the theoretical limit on the number of variables,
>however, is a very bad idea in practice for several reasons.

>1. These are the theoretical limits in that you absolutely cannot go
>beyond them.  But there are other environmentally imposed limits
>that you will surely hit first.  For example, Windows applications
>are absolutely limited to 2GB of addressable memory, and 1GB is a
>more practical limit.  Each dictionary entry requires about 100
>bytes of memory, because in addition to the variable name, other
>variable properties also have to be stored.  (On non-Windows
>platforms, SPSS Server could, of course, face different
>environmental limits.)  Numerical variable values take 8 bytes as
>they are held as double precision floating point values.
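A rough back-of-envelope estimate based on the figures in point 1 (about 100 bytes of dictionary memory per variable, 8 bytes per numeric value) can be written as a short sketch. The constants come from the posting; the function itself is only illustrative:

```python
DICT_BYTES_PER_VARIABLE = 100  # approximate figure from the posting
VALUE_BYTES_PER_NUMERIC = 8    # double-precision floating point

def estimate_bytes(n_variables, n_cases_in_memory=0):
    """Rough memory estimate: dictionary overhead plus numeric
    case data held in memory. Ignores strings and other overhead."""
    dictionary = n_variables * DICT_BYTES_PER_VARIABLE
    data = n_cases_in_memory * n_variables * VALUE_BYTES_PER_NUMERIC
    return dictionary + data

# 100,000 variables cost roughly 10 MB for the dictionary alone:
print(estimate_bytes(100_000))  # 10000000
```

This is well under the 2 GB address-space ceiling mentioned above, which is consistent with the point that paging and case-width overhead, not the dictionary by itself, are what degrade performance first.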

>2. The overhead of reading and writing extremely wide cases when you
>are doubtless not using more than a small fraction of them will
>limit performance.  And you don't want to be paging the variable
>dictionary.  If you have lots of RAM, you can probably reach between
>32,000 and 100,000 variables before memory paging degrades
>performance seriously.

>3. Dialog boxes cannot display very large variable lists.  You can
>use variable sets to restrict the lists to the variables you are
>really using, but lists with thousands of variables will always be awkward.

>4. Memory usage is not just about the dictionary.  The operating
>system will almost always be paging code and data between memory and
>disk.  (You can look at paging rates via the Windows Task
>Manager).  The more you page, the slower things get, but the
>variable dictionary is only one among many objects that the
>operating system is juggling.  However, there is another effect.  On
>NT and later, Windows automatically caches files (code or data) in
>memory so that it can retrieve them quickly.  This cache occupies
>memory that is otherwise surplus, so if any application needs it,
>portions of the cache are discarded to make room.  You can see this
>effect quite clearly if you start SPSS or any other large
>application; then shut it down and start it again.  It will load
>much more quickly the second time, because it is retrieving the code
>modules needed at startup from memory rather than disk.  The Windows
>cache, unfortunately, will not help data access very much unless
>most of the dataset stays in memory, because the cache will
>generally hold the most recently accessed data.  If you are reading
>cases sequentially, the one you just finished with is the LAST one
>you will want again.