I can't see how using an ARM is different than using some other processor from a C++ programming point of view. The arm32 toolchain has a few quirks, but at the level of a high-level language it doesn't matter.
You could go for a full computer with a familiar desktop OS on it, along with all the usual tools, so you can write and run programs right on the device: https://en.wikipedia.org/wiki/Raspberry_Pi
One cannot answer such a question in general. There are certainly some traps with C++, but in the end it all depends on you. You can write well-performing code in C++ and horribly slow code in C.
I do not know this system, but in general some embedded platforms use custom language extensions, very specific libraries, and so on, making the code a little different from pure C++, just as a Windows GUI program has some specific things it needs. So you may have to learn not only the C++ but also some common, platform-specific libraries (on top of any other libraries you need on the side).
Also, some embedded systems speak slightly altered C++; a notorious example is the Arduino family, which uses almost, but not quite, standard C++. Other toolchains may be stuck on C++98 or older, or may ship without the standard library at all. Some targets do not support floating point, recursion, threading, or other features due to hardware limitations.
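To make the "almost, but not quite C++" point concrete, here is a minimal Arduino-style sketch (the classic Blink example). It only builds inside the Arduino environment, which supplies main(), auto-generates function prototypes, and defines things like LED_BUILTIN, pinMode(), digitalWrite(), and delay():

```cpp
// A .ino sketch has no main(); the Arduino core calls setup() once,
// then loop() forever. That framework glue is what makes a sketch
// slightly different from a plain standalone C++ program.

void setup() {
    pinMode(LED_BUILTIN, OUTPUT);     // LED_BUILTIN is a board-specific macro
}

void loop() {
    digitalWrite(LED_BUILTIN, HIGH);  // LED on
    delay(1000);                      // wait one second
    digitalWrite(LED_BUILTIN, LOW);   // LED off
    delay(1000);
}
```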
I can't see how using an ARM is different than using some other processor from a C++ programming point of view.
Well, isn't that one of the points of a high-level language? It's supposed to let you write code that can be ported between architectures. In theory, if you write a C++ program that has no undefined behavior, it should run on any system it can be compiled for.
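As a small illustration of where "in theory" and "in practice" diverge between ARM and x86 specifically, the signedness of plain char is implementation-defined: it is typically signed on x86 ABIs and unsigned on the common ARM ABIs. A rough sketch of code whose behavior quietly changes when recompiled for the other architecture:

```cpp
#include <cstdio>

int main() {
    // Implementation-defined: plain char is usually signed on x86
    // but unsigned on the common ARM ABIs. Code that assumes one or
    // the other can silently change behavior when ported.
    char c = 0xFF;  // -1 if char is signed, 255 if char is unsigned
    if (c < 0)
        std::printf("plain char is signed here (typical on x86)\n");
    else
        std::printf("plain char is unsigned here (typical on ARM)\n");
    return 0;
}
```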
In practice, there are often subtle things that are undefined, or defined in a way that can be subject to interpretation. There was a great example in the early '90s. If I recall correctly, a large company changed their malloc() implementation so it would always succeed if there was room in the process's virtual address space, but it wouldn't actually attempt to allocate the memory until you tried to access it. They argued that this odd behavior complied with the specs for malloc(), but developers howled at the idea, because all code assumed that if malloc returned non-null, then the memory pointed at was accessible.
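A rough sketch of the assumption that kind of overcommitting allocator breaks (the 16 GiB figure is arbitrary, and it assumes a 64-bit build; modern Linux behaves much the same way by default):

```cpp
#include <cstdlib>
#include <cstdio>
#include <cstring>

int main() {
    // The traditional assumption: a non-null return from malloc() means
    // the memory is usable right now.
    const std::size_t big =
        static_cast<std::size_t>(16) * 1024 * 1024 * 1024;  // 16 GiB
    void* p = std::malloc(big);
    if (p == nullptr) {
        std::puts("allocation refused up front - the behavior callers expect");
        return 1;
    }
    // Under an overcommitting allocator, malloc may have said yes without
    // reserving real memory; the failure (e.g. the process being killed)
    // only shows up when the pages are first touched here.
    std::memset(p, 0, big);
    std::puts("the memory really was there");
    std::free(p);
    return 0;
}
```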