Are you really going to argue that if you encountered this interface [...] you'd be unable to write a function that uses it correctly unless you read how the instance implements it?
Consider the following implementation:
class Derived : public MyInterface
{
public:
    const std::string &get_name() const override
    {
        static std::string name = some_function();
        return name;
    }
};
Just by looking at the interface, I can't say whether or not that function provides exception-safety guarantees.
I can't tell you whether or not there are side-effects. I can't tell you if I can call it from multiple threads. If I care about any of these things, I have to go look it up. In general one can't just use a function blindly and ignore everything that could go wrong.
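To make that concrete, here is a minimal, self-contained sketch (the interface, `some_function`, and the `Trivial` class are filled in hypothetically; only `Derived` follows the implementation quoted above). Both classes satisfy the same interface, yet one may throw on first call and touches shared static state, and nothing in `MyInterface` reveals the difference:

```cpp
#include <string>

// The interface under discussion, reconstructed minimally.
class MyInterface
{
public:
    virtual ~MyInterface() = default;
    virtual const std::string &get_name() const = 0;
};

// Hypothetical implementation: never throws, no shared state,
// safe to call from any thread.
class Trivial : public MyInterface
{
public:
    const std::string &get_name() const override { return name_; }
private:
    std::string name_ = "trivial";
};

// Stand-in for the some_function() in the quoted code.
std::string some_function()
{
    return "computed";
}

// The quoted implementation: the first call runs some_function(),
// which may throw, and initializes function-local static state that
// all callers then share. Since C++11 that initialization is
// thread-safe, but none of this is visible in MyInterface.
class Derived : public MyInterface
{
public:
    const std::string &get_name() const override
    {
        static std::string name = some_function();
        return name;
    }
};
```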
In any event, if a derived class might implement a non-trivial override, this wouldn't satisfy the criteria for a trivial interface.
Your argument was "these two pieces of code do the exact same thing, but one of them more fully defines its semantics, therefore it's better"
My argument is "Choice A introduces a member function which provides no immediate value. Choice B doesn't, but still accomplishes the same thing; Choice B more fully defines the semantics, therefore it's better."
Can you explain why [your argument] couldn't be applied to the more general case of general encapsulation and data hiding? If having more thoroughly-defined semantics is inherently better, then unconditionally and fully exposing the implementation must be the perfect solution.
Generally, I might claim that having no abstraction is better than having one that doesn't provide any current value. But it is not clear that no abstraction is better than one which does provide value.
For example, member functions which maintain a class invariant are a valuable abstraction, because they prevent clients from having to remember and maintain those invariants themselves. A client which accesses the implementation directly, of course, assumes that burden.
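A brief sketch of that point (the class and its invariant are my own invention, not from the discussion): the member functions below enforce "denominator is never zero", so no client can put the object into an invalid state, whereas a plain struct pushes that burden onto every client:

```cpp
#include <stdexcept>

// The mutator enforces the invariant; clients cannot break it.
class Fraction
{
public:
    Fraction(int num, int den) { set(num, den); }

    void set(int num, int den)
    {
        if (den == 0)
            throw std::invalid_argument("denominator must be non-zero");
        num_ = num;
        den_ = den;
    }

    double value() const { return static_cast<double>(num_) / den_; }

private:
    int num_;
    int den_;
};

// With direct access, every client must remember the invariant itself.
struct RawFraction
{
    int num;
    int den;  // nothing stops a client from setting this to 0
};
```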
In 2002 Joel Spolsky wrote about a "law of leaky abstractions", which supports this case pretty well. His idea is that every abstraction can be made to expose its underlying implementation, at least to some extent. When this happens, the programmer needs to care about it.
https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/
I will concede that the payoff of getters and setters is unusual, but if you want to argue against them, you need to make the case that the combined cost of adding them is greater than the benefit they add, and/or the risk of not adding them when you did need them after all.
Indeed I do, but I'm not sure this can be quantified, so we're stuck debating ;).
My suggestion would be to continue to add getters and setters, but only judiciously where you feel a payoff is likely. Otherwise you're paying for something you don't need.
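As a concrete illustration of that trade-off (the names here are invented for the example, not taken from either side's code): Choice A pays for accessors now, with a payoff only if the representation ever changes; Choice B exposes the data directly until an invariant or alternate representation actually justifies an accessor:

```cpp
#include <string>

// Choice A: speculative accessors that currently add nothing
// beyond raw member access.
class ConfigA
{
public:
    const std::string &host() const { return host_; }
    void set_host(const std::string &h) { host_ = h; }
private:
    std::string host_;
};

// Choice B: direct access; add an accessor later only if a real
// invariant or representation change demands one.
struct ConfigB
{
    std::string host;
};
```

Both clients read the same way at the call site; the question is whether the indirection in `ConfigA` is something you are likely to need.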