Memory complexity sounds like a term for how much memory your program uses. For example, if you wanted to give that in bytes, in the above example your memory complexity is N*8: 10 integers use ~80 bytes, 10,000 use ~80,000 bytes, and so on. The formal term is space complexity, and it is written with the same big-O notation as time complexity, so the above would be O(N) space.
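If it helps, here is that arithmetic as a tiny C++ sketch (assuming 8-byte integers and ignoring any container overhead; the numbers are just the examples above):

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>

int main()
{
    // Rough space estimate for N 64-bit integers: N * sizeof(std::int64_t) bytes.
    // This is just the N*8 arithmetic from the post above.
    for (std::size_t n : {10, 10000})
        std::cout << n << " integers -> ~" << n * sizeof(std::int64_t) << " bytes\n";
    return 0;
}
```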
The above is important because of the time-space tradeoff. For example, to bucket sort 64-bit integers, you need 2^64 * sizeof(some counter type) bytes: one counter per possible value. Even if any one value could repeat at most 255 times, so a 1-byte counter is enough, that is still 2^64 bytes of memory. Most machines don't have that much, so the O(N) sort of 64-bit numbers is too heavy on the time-space tradeoff: you don't have the space available to buy that time, and you need to use a slower but less memory-hogging algorithm. But if you were sorting billions of 1-byte values, the counters are only an array of 256 x 8 bytes, which will fit on any computer, so you can do that one... see how this is useful info?
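To make the cheap end of that concrete, here is a minimal counting-sort sketch for 1-byte values (my own illustration, not anyone's posted code); the fixed 256-entry count table is the entire auxiliary cost, no matter how many inputs there are:

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Counting sort for 8-bit values: one 64-bit counter per possible value,
// so the auxiliary space is always 256 * 8 bytes, regardless of input size.
std::vector<std::uint8_t> counting_sort(const std::vector<std::uint8_t>& in)
{
    std::array<std::uint64_t, 256> count{};   // zero-initialized counters
    for (std::uint8_t v : in)
        ++count[v];

    std::vector<std::uint8_t> out;
    out.reserve(in.size());
    for (int v = 0; v < 256; ++v)
        out.insert(out.end(), count[v], static_cast<std::uint8_t>(v));
    return out;
}
```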
A counting sort could be considered a subclass of bucket sort, but it is sufficiently unique that we consider it differently. Both algorithms have their own FAQ page:
Large repetition counts in the {n,m} construct may cause grep to use lots of memory. In addition, certain other obscure regular expressions require exponential time and space, and may cause grep to run out of memory.
Back-references are very slow, and may require exponential time.
Depends on what you are counting: the stickers? The sides? There are 9 stickers per side on the standard one, x 6 sides...
Or the space needed to solve it?
A counting sort could be considered a subclass of bucket sort, but it is sufficiently unique that we consider it differently. Both algorithms have their own FAQ page:
I learned this stuff before there was a web. Back when I picked this one up, I learned it as bucket... I don't know if they had named it back then, or if the teacher knew the special case had a name. Anyway, the point was the space consumed vs. the time taken. Honestly, we spent the most time on Shell sort. Probably why it became a favorite... I loved the weird analysis of it. I remember leaving the machines on overnight to get the run time of 1000 items in a bubble sort.
Sounds like whoever taught you the sorts was either a little unsure of the differences himself/herself or was rushing through it. Both sorts have existed for ages, and are very, very thoroughly covered by Knuth.
Also related is the radix sort.
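In case it helps tie these together, here is a rough sketch of an LSD radix sort on unsigned 64-bit values, done as eight stable counting passes, one per byte (the names and layout are my own, not from any particular library):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// LSD radix sort: 8 stable counting-sort passes, one per byte, least
// significant byte first. Auxiliary space is a 256-entry count table
// plus one temporary buffer the size of the input.
void radix_sort(std::vector<std::uint64_t>& a)
{
    std::vector<std::uint64_t> tmp(a.size());
    for (int shift = 0; shift < 64; shift += 8)
    {
        std::array<std::size_t, 256> count{};
        for (std::uint64_t v : a)
            ++count[(v >> shift) & 0xFF];

        // turn counts into starting offsets (exclusive prefix sums)
        std::size_t total = 0;
        for (std::size_t& c : count)
        {
            std::size_t old = c;
            c = total;
            total += old;
        }

        // scatter into the temporary buffer, preserving order within a digit
        for (std::uint64_t v : a)
            tmp[count[(v >> shift) & 0xFF]++] = v;
        a.swap(tmp);
    }
}
```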
@salem c
I disagree about the card games thing — card games often work on the suboptimal.
For example, Go Fish: Not every card you ask for is going to be available in another player’s hand — you will have to draw from the pool, and even toward the end of the game you are unlikely to draw the desired card. I don’t think you could construct an O(n²) scenario, but I don’t think you are likely to have anywhere near an O(n) scenario either (though I suppose one could be constructed). Without actually computing it, I would guess somewhere in ω(n log n), o(n²). Yes, that’s little O and little Ω.
For a game like Solitaire (say, Free Cell), it is even worse. As you build up the stacks, you have to touch each card in a stack just to move it around. This is definitely an Ω(n²).
Back to sorting algorithms: sorting a deck of cards is a very convenient way to think about sorting stuff, particularly when you get into bucket sort and its variations (like radix and counting). Again, you find that the algorithmic complexity is not going to be O(n), except in the single case of those sorts that recognize an already-sorted sequence after the first pass.
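On that last point, the usual example is bubble sort with an early-exit flag, which notices an already-sorted input after a single pass; a quick illustrative sketch:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Bubble sort with an early exit: if a full pass makes no swaps, the
// sequence is already sorted and we stop. That gives the O(n) best case
// on sorted input; the average and worst cases are still O(n^2).
void bubble_sort(std::vector<int>& a)
{
    bool swapped = true;
    for (std::size_t end = a.size(); swapped && end > 1; --end)
    {
        swapped = false;
        for (std::size_t i = 1; i < end; ++i)
        {
            if (a[i - 1] > a[i])
            {
                std::swap(a[i - 1], a[i]);
                swapped = true;
            }
        }
    }
}
```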
both :)
It was a high-school teacher, and we rushed it pretty hard. We did a full, near-college-level data structures & algorithms class, plus learning the language, in one year. No OOP, though; it was in Pascal, of all things... my first year of college was... dull because of this.