Most people complain about exceptions. They then proceed to complain that exceptions are the only way to check whether an object's construction has failed, which gives them the feeling that exceptions are being forced on them. Exceptions may be the best way, but they're definitely not the only one. One of the most popular ways around them is two-step construction.
class Example
{
public:
    Example();        // does nothing that can fail
    bool Create();    // second step: the real initialization; returns false on failure
private:
    bool bOkay;       // records whether Create() succeeded
};
I personally think it's counterintuitive. No matter, though; it gets applied even in places where we might actually want to use exceptions, such as company code. You'll often hear questions like, "Is it beneficial in any other way?" or "Is there ever a time I'd want to do this over one-step construction?" The answer to both is: I don't know. Create() doesn't allow the optimization of a constructor initialization list the way the constructor itself does, and it always carries basic function-call overhead since more calls are being made. One could argue that the overhead exceptions cause is greater than what two-step construction causes, and that's a valid argument: exceptions really are heavy when it comes to throwing them. However, it's my belief that the benefits of exceptions outweigh the costs. I wouldn't actually know; I've never cared to count the cycles or benchmark it.
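For reference, this is roughly what the pattern looks like at the call site (just a sketch using the Example class above; the names are placeholders):

// Minimal sketch: the caller has to remember the second step -- nothing enforces it.
bool UseExample()
{
    Example ex;             // ctor does nothing that can fail
    if( !ex.Create() )      // the real initialization happens here
        return false;       // ex is not usable past this point
    // only now is ex fully constructed and safe to use
    return true;
}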
With today's hardware, I can't see many performance-critical applications where using exceptions would be a deal-breaker. Most of the time, I think it comes down to personal preference.
Another option that I've seen a lot of people use is something like this:
class Example
{
public:
// Returns a null pointer if instantiation fails.
static Example* NewInstance(void);
private:
Example(void);
};
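One possible shape for NewInstance() would be something like this (a sketch only; whatever can actually fail goes where the comment is):

#include <new>   // std::nothrow

Example* Example::NewInstance(void)
{
    Example* p = new(std::nothrow) Example();   // avoid throwing bad_alloc
    if(p == 0)
        return 0;                                // allocation failed
    // ...attempt whatever initialization can fail here;
    // on failure: delete p; return 0;
    return p;
}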
Personally, I always use one-step construction. I don't like the idea of having my objects floating around "in limbo" between construction stages.
Connection c;
ErrorCode e;
e = c.Connect("myserver");
if(e != OK)
return ProcessError(e);
e = c.Login("myuser","mypassword");
if(e != OK)
return ProcessError(e);
Table t;
e = c.GetTable("Foo",t);
if(e != OK)
return ProcessError(e);
Record r;
e = t.GetRecord("Desired Record",r);
if(e != OK)
return ProcessError(e);
r.ModifySomehow();
e = t.UpdateRecord("Desired Record",r);
if(e != OK)
return ProcessError(e);
Lots of C libs work this way. WinAPI and DirectX specifically force you to check error returns after damn near EVERYTHING, so your code ends up looking a lot like this.
The idea is that you can write code that assumes everything is working fine, without having to check your status after every little thing. In the event that something disruptive happens, it'll jump right to your error handling code.
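To make that concrete, here's roughly what the sequence above would look like if those calls threw exceptions instead of returning error codes (a sketch using the same made-up classes; ConnectionError is just a placeholder exception type):

try
{
    Connection c;
    c.Connect("myserver");
    c.Login("myuser","mypassword");

    Table t = c.GetTable("Foo");
    Record r = t.GetRecord("Desired Record");
    r.ModifySomehow();
    t.UpdateRecord("Desired Record",r);
}
catch(const ConnectionError& e)
{
    ProcessError(e);
}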
I know, and it's awesome. In C, though, there are often macros that wrap API functions, like "APICHECK(myAPIFunction(structBlahblahblah));". This is done with OpenGL inside of SFML and in various other C codebases.
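Something along these lines (a generic sketch, not SFML's actual macro; GetApiError() and ReportApiError() stand in for whatever error query and reporting the API really provides, e.g. glGetError() for OpenGL):

#define APICHECK(call)                                          \
    do {                                                        \
        (call);                                                 \
        int err = GetApiError();          /* hypothetical */    \
        if(err != 0)                                            \
            ReportApiError(__FILE__, __LINE__, #call, err);     \
    } while(0)

// the call site stays readable:
// APICHECK(myAPIFunction(structBlahblahblah));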
helios, I understand your point, but when someone asks, "What do I do when my constructor fails?" most will say, "Throw an exception." When they ask, "What do I do if I can't throw an exception due to rules outside of C++?" then most will say, "Two-step construction."
In my opinion, the idea behind using exceptions is that in well-developed code, exceptions will rarely occur. This means that you have faster code from not checking for errors, and that even if an exception does cause some overhead, it should be rare enough that it doesn't even matter. Whereas for work-arounds and other nonsense methods, you have slower code and much more tedious programming work.
I would recommend reading C++ Coding Standards by Sutter/Alexandrescu. There, the authors claim that there are a couple of valid uses of exceptions:
1) To handle exceptional cases -- such as /bin/bash not being accessible/executable.
2) To propagate errors where C++ syntax provides no other means, such as from constructors and many of the operators.
They further say that exceptions should NOT be used if it is the expectation that the immediate caller will always handle the error. If this were the case, the programmer would end up littering his code with try-catch blocks that just increase the amount of code he has to write.
Yes, it is possible to implement operator+ such that the programmer needs to call an "Ok()" method on the resultant object to see if the operation succeeded, but doing so basically erases the benefit of the syntax provided by operator+. At that point, you may as well eschew the operator and write the method "bool add( Thing& result, const Thing& rhs ) const". At least in this way, you can get a compiler warning if the result of add() is ignored.
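In other words, the two alternatives look something like this (a sketch; Thing, Ok() and add() follow the signatures discussed above, and HandleFailure() is a placeholder):

// Style 1: keep operator+, but the caller has to remember to ask afterwards.
Thing sum = a + b;
if( !sum.Ok() )
    return HandleFailure();

// Style 2: give up the operator syntax so failure is part of the signature:
//     bool Thing::add( Thing& result, const Thing& rhs ) const;
Thing sum2;
if( !a.add( sum2, b ) )
    return HandleFailure();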
Two-step construction has the same problem: nothing forces the programmer to call the second function to complete initialization. As the implementor of the class, you have two options. Either claim GIGO in all your methods, and eschew any safety checks, leaving the user with weird crashes to debug, or provide checks in your methods to detect a partially initialized object and do something about it. Unfortunately these additional checks incur a runtime penalty, all in the name of protecting the user against himself.
But it can be worse than that: you may have to be very careful about destruction of the object. It's always best to provide a strong exception guarantee (in this case without actually throwing an exception) so that the object can be safely destroyed and any resources it holds can be released. This may mean making the destructor a little more complicated.
At least GCC provides you with a way to markup your function declaration so that if the user ignores the return code, you'll get a warning. Just try making a simple program that opens a file, does a write() on it and ignores the return code. With -Wall, you should get a warning.
[EDIT: You may need a "newer" GCC for this. Something in the 4.x series at least, if not 4.1+]
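For the curious, the markup in question is GCC's warn_unused_result attribute; a minimal sketch:

__attribute__((warn_unused_result))
bool Create();    // a free function here just to keep the example short

void Caller()
{
    Create();     // GCC warns that the return value is being ignored
}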
"The idea is that you can write code that assumes everything is working fine, without having to check your status after every little thing. In the event that something disruptive happens, it'll jump right to your error handling code."
The theory is nice, but in practice, if you coded this way, your code would usually be exception-unsafe, which means that if it throws an exception, there's a 99-in-100 chance it leaks resources.
The main reason why companies (like Mozilla or Google) ban exceptions in C++ is manual memory management, not performance issues. If you use exceptions and you don't want to cause a memory leak, you have to make sure that:
1. The code for resource cleanup will be fired properly.
2. The resource cleanup won't fail.
3. The resource cleanup won't fail even if the object is partially constructed (if your constructors throw exceptions, this is quite easy to get into).
The first one can be achieved by RAII and smart pointers (and those are an order of magnitude slower than built-in pointers; see the sketch at the end of this post). The third one can get really hairy, though, and there is no universal solution for it. So in reality, no, you cannot code under the assumption that everything works fine. You still have to take into account both scenarios: that everything works fine, and that an exception can be thrown out of nowhere :D
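For point 1, the usual RAII shape is something like this (a sketch; Resource and DoWork() are made-up names):

#include <memory>

void Process()
{
    std::auto_ptr<Resource> r( new Resource() );
    DoWork( *r );    // may throw
}                    // r's destructor deletes the Resource on both the normal
                     // and the exceptional path, so nothing leaks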
It is true that boost::shared_ptr is much slower than built-in pointers, however there are lighter-weight alternatives that may be better fits in many circumstances anyway. For example, boost::scoped_ptr, std::auto_ptr, and std::unique_ptr. shared_ptr should be used only when ownership must be shared.
2 and 3 are problems regardless of how you report errors.
It is always recommended to use managed pointer objects rather than raw pointers, since doing so all but eliminates memory leaks, even in programs that don't use exceptions. This is why advocates of exceptions regard the argument of "many potential exit points of a function" (due to exceptions being thrown) as moot.