#include <iostream>
#include <memory>

int main() {
    /*
    std::unique_ptr<int[]> p(new int[5]{1, 2, 3, 4, 5});
    for (int i = 0; i < 5; i++) {
        std::cout << p[i] << "\n";
    }
    */
    int *p = new int[5]{1, 2, 3, 4, 5};
    for (int i = 0; i < 5; i++) {
        std::cout << p[i] << "\n";
        //std::cout << *(p++) << "\n"; // this changes p
    }
    delete[] p; // fine here, but if the loop had used *(p++), this wouldn't be the p that was newed
    p = nullptr;
    return 0;
}
Also, since p goes out of scope at the end of the function, there is absolutely no point in setting it to null after the deletion. [In other cases, if you have a long-lived pointer in a wider scope (class scope, for example), you might want to set it to null if you are afraid of double deletions.]
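A hypothetical sketch of that wider-scope case (the names here are illustrative, not from the code above):

struct Buffer {
    int *data = nullptr;
    void allocate(int n) { data = new int[n](); }
    void release() {
        delete[] data;  // deleting a null pointer is a no-op, so calling release() twice is safe
        data = nullptr; // guards against double deletion
    }
    ~Buffer() { release(); }
};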
Also, we would be remiss if we did not mention that this is something you shouldn't do at all. If it's just for educational purposes, sure, but for real code, C++ provides std::vector<int> or std::array<int, size>, depending on your needs.
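For example, the snippet above collapses to:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5};
    for (int x : v)
        std::cout << x << "\n";
    // no delete[] needed: v releases its storage automatically
}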
It depends on what you define as real code. Win32's WASAPI hands out COM pointers that need to be released manually. It's the only low-latency audio library from Microsoft that is still compatible with Windows 7. WPF doesn't have low-latency audio, and UWP only works on Windows 10.
This means that if you want to target the large audience not using Windows 10, you have to be comfortable managing those pointers yourself.
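For reference, a minimal sketch of what that looks like with raw COM pointers (assuming COM has already been initialized with CoInitializeEx; error handling elided):

#include <mmdeviceapi.h>

void open_default_device() {
    IMMDeviceEnumerator *enumerator = nullptr;
    IMMDevice *device = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator),
                     reinterpret_cast<void **>(&enumerator));
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);
    // ... use the device ...
    device->Release();      // every acquired interface must be released,
    enumerator->Release();  // on every return path; this is what's easy to get wrong
}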
Nitpicking, but in most cases it's easier to simply define a custom deleter type and let unique_ptr perform the deletion:
#include <memory>

struct com_deleter
{
    template <typename T>
    void operator()(T *ppT) const noexcept
    {
        if (ppT) ppT->Release();
    }
};

template <typename T>
using com_unique = std::unique_ptr<T, com_deleter>;

// This assertion should hold in a decent implementation:
static_assert(sizeof (std::unique_ptr<int, com_deleter>) == sizeof (int*));
Because com_deleter is an empty type, empty base optimization is applied, such that (typically) sizeof (std::unique_ptr<T, com_deleter>) == sizeof (T*). This eliminates extra overhead, and there's no need to initialize the deleter.
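Usage then looks like this (reusing the WASAPI example from above; error handling elided):

IMMDeviceEnumerator *raw = nullptr;
CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                 __uuidof(IMMDeviceEnumerator),
                 reinterpret_cast<void **>(&raw));
com_unique<IMMDeviceEnumerator> enumerator{raw}; // adopts the raw pointer

IMMDevice *dev = nullptr;
enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &dev);
com_unique<IMMDevice> device{dev};
// no manual Release(): both interfaces are released when the com_uniques go out of scope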
Indeed, and if someone used such code and didn't try to make it safe(r) using modern C++ techniques (as suggested by Helios), I'd fail that code review and send it back. Dollars to doughnuts, someone else would work on that code in the future, and even if it was all correct when they started, they'd miss something, or add something that could throw an exception, or add a new set of return paths; something will go wrong with it eventually, all of which would have been avoided if only it had been made properly safe at the start.
So by "real code" I suppose I mean code that isn't embarrassingly dangerous and that won't blow up in some future maintainer's face. We might be stuck using legacy APIs that today would be considered toy code for demonstration purposes, but we can still add improvements to make it "real code".
if someone used such code and didn't try to make it safe(r) using modern C++ techniques (as suggested by Helios), I'd fail that code review and send it back
At my university (and I assume most of them), if you submit code with smart pointers and such for safety, they'd also fail it!
I expect so, yes. As I understand it, university courses named "Computer Science" or similar are generally not about teaching people how to be effective software engineers; using effective software engineering techniques to sidestep whatever (dangerous in practice) theory is being demonstrated will simply miss the point.