This pattern is usually wrong, because the code between the first new and the last delete is likely to be potentially-throwing: if an exception propagates, the delete is never reached and the object leaks.
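A minimal sketch of the failure mode (the Player type and the doSomethingThatMayThrow call are hypothetical stand-ins):

void update()
{
    Player* player = new Player;
    doSomethingThatMayThrow(*player); // if this throws...
    delete player;                    // ...this line is skipped and player leaks
}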
You don't need the new keyword to create objects. You should only create an object with dynamic storage duration when its lifetime must extend beyond the scope that creates it:
while (game.isRunning()) {
    Player player; // preferred, correct
}
If you can't manage the above because you need dynamic storage duration, you should use std::unique_ptr instead of plain-old new:
while (game.isRunning()) {
    auto player = std::make_unique<Player>(); // not preferred, correct
    // ...
}
When a function is called, the stack is enlarged by modifying the stack pointer. For this reason, stack allocations only rarely require more than a single instruction per function.
Allocating from the heap at best requires a call to the CRT, but at worst requires the kernel to do basically arbitrary amounts of work to find and allocate enough contiguous empty pages.
Further, memory obtained by built-in operator new() is not likely to be as cache-friendly.
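A rough micro-benchmark sketch of the difference (the Player type here is an assumption; absolute numbers vary wildly with compiler, allocator, and optimisation level, and a clever optimiser may even elide the heap allocations entirely, as the next answer notes):

#include <chrono>
#include <cstdio>
#include <memory>

struct Player { int hp = 100; };

volatile int sink = 0; // keeps the optimiser from deleting the loops outright

template <typename F>
long long microseconds(F f)
{
    auto start = std::chrono::steady_clock::now();
    f();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}

int main()
{
    constexpr int N = 1'000'000;

    // Automatic storage: one stack-pointer adjustment per call frame.
    long long stackUs = microseconds([] {
        for (int i = 0; i < N; ++i) {
            Player p;
            sink += p.hp;
        }
    });

    // Dynamic storage: every iteration goes through the allocator.
    long long heapUs = microseconds([] {
        for (int i = 0; i < N; ++i) {
            auto p = std::make_unique<Player>();
            sink += p->hp;
        }
    });

    std::printf("stack: %lld us, heap: %lld us\n", stackUs, heapUs);
}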
That depends on the compiler; it is a quality-of-implementation issue.
A good optimiser may elide dynamic memory allocation (this is permitted by the standard).
int gv = 0 ;

struct A
{
    // constructor has a side effect
    A( int v = 0 ) : value(v) { ++gv ; }
    int value ;
} ;

int foo()
{
    int result = 0 ;

    for( int i = 0 ; i < 10 ; ++i )
    {
        A* pa = new A {i} ;
        result += pa->value ;
        delete pa ;
    }

    return result ;

    /* compiled by clang++ as-if
         gv += 10 ;
         return 45 ;
    */
}
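(This folding requires optimisations to be enabled, e.g. clang++ -O2; the licence to elide or merge allocations made by new-expressions was added to the standard for C++14 via proposal N3664.)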
Yes, and there are a lot of issues involving memory allocation that this touches on.
First, the memory allocator isn't free. It's very cheap compared to garbage collection in a language like Java or C#, but it is not free. Answers to the question of "how slow" depend heavily on the compiler/stdlib/runtime libraries, but compared to stack allocation it's going to be quite slow.
Second, general-purpose memory allocators are often very bad at memory layout. When you use the new operator, it goes through an allocator that must work in all situations, whether you're allocating 100,000 objects of 1 byte or 1 object of 100,000 bytes. Since it doesn't know ahead of time what you'll be allocating, overusing or misusing the allocator can lead to problems like memory fragmentation, where the memory map is riddled with holes too small for most allocations, which negatively affects performance. I even read an article about a strategy game where after a few hours the game would crash reliably, and they finally traced the problem down to memory fragmentation: there was still plenty of memory free in total, but no contiguous block large enough to satisfy the allocation.
However, I can't think of many uses for doing something like this. I don't do gamedev in C++ (yet?), but at least in Unity the mantra is to pool everything: once your game is running, you want zero allocations unless absolutely necessary. There it's because the garbage collector is slow and makes you drop frames, but all the problems I mentioned above are also solved by proper object pooling and by avoiding allocations. C++ is much more flexible in this regard: it lets you choose where objects live and even lets you design your own memory allocators, which can alleviate a lot of these problems. A minimal pool sketch follows below.
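As a rough illustration, here is a toy fixed-capacity object pool (the Player type, the PlayerPool name, and the capacity are assumptions; a production pool would also need to handle growth, alignment of arbitrary types, and thread safety):

#include <cassert>
#include <cstddef>
#include <new>
#include <vector>

struct Player { int hp = 100; };

// One up-front allocation, then O(1) acquire/release with no further
// calls into the general-purpose allocator.
class PlayerPool {
public:
    explicit PlayerPool(std::size_t capacity)
        : storage_(capacity * sizeof(Player))
    {
        free_.reserve(capacity);
        for (std::size_t i = 0; i < capacity; ++i)
            free_.push_back(storage_.data() + i * sizeof(Player));
    }

    Player* acquire()
    {
        assert(!free_.empty() && "pool exhausted");
        void* slot = free_.back();
        free_.pop_back();
        return new (slot) Player{}; // placement-new into pooled storage
    }

    void release(Player* p)
    {
        p->~Player();               // destroy, but keep the memory for reuse
        free_.push_back(p);
    }

private:
    std::vector<std::byte> storage_; // contiguous, cache-friendly backing store
    std::vector<void*> free_;        // free list of available slots
};

Usage would look like Player* p = pool.acquire(); ... pool.release(p); — the point being that after construction the pool never touches the heap, so there is nothing left to fragment.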