Thursday 24 September 2009

Generic programming in C++: general stuff

Much has been said about C++ generic programming, some of it good, some of it not. In this post, the first in a series devoted to C++ generic programming, I would like to dig into the role templates play in the C++ programmer's toolset.

C++ is a multi-paradigm language, supporting procedural, object oriented and generic programming, but most of the C++ projects I have ever seen basically use OOP, plus just enough generic programming to deal with the STL and/or Boost, or to implement nothing more complex than containers of T [1].

I see a big gap between C++ hardcore practitioners and the average C++ programmer concerning templates. While most C++ programmers can use runtime polymorphism with no problem, they are hardly able to code a simple container-of-T by themselves. On the other hand, C++ experts tend to evangelize the benefits of template metaprogramming. Why is generic programming such an infrequent tool in daily coding? What do C++ gurus see in the use of templates that other people "miss"?

The main reasons why -in my opinion- templates tend not to be used are:

  • Lack of knowledge of the template mechanism and its syntax, related to the fact that most mainstream languages do not have the generic programming capabilities C++ does.


  • The inherent difficulty of getting templates right: as everybody knows, errors in code that uses templates can be a nightmare to decipher, and the deeper you dig into generic programming idioms, the longer the error messages you get.


  • They break encapsulation: templatized code imposes implicit (sorry, no concepts in C++0x) constraints on its template parameters (e.g. if the templatized code uses the template parameter's copy constructor, every instantiation will require such a constructor from the parameter). Thus code that uses a template has to know its implementation to use it properly. Failure to provide template parameters that match those implicit expectations is the most frequent source of errors in code using templates (see the sketch after this list).



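Here is a minimal sketch (with made-up types) of such an implicit constraint in action:

template <typename T>
class Holder {
public:
    Holder(const T& t) : value_(t) { } // silently requires T to be copy-constructible
private:
    T value_;
};

class NonCopyable {
public:
    NonCopyable() { }
private:
    NonCopyable(const NonCopyable&); // copying forbidden (pre-C++0x style)
};

int main() {
    Holder<int> ok(42); // fine: int is copyable
    // Holder<NonCopyable> bad((NonCopyable())); // breaks at instantiation time,
    // with an error message pointing inside Holder rather than at this line
    return 0;
}
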
In the following posts, I will try to explore the very basics of template syntax, common uses of template specialization, the usual generic programming idioms, and some metaprogramming spells...


[1] The best test I came up with to challenge my opinion is to run a Google Code Search for the key terms of polymorphism and generic programming in C++...
search: "template <" lang:c++ --> 339000 results
search: "virtual " lang:c++ --> 2130000 results
If anybody thinks there is a better way to measure it, comment please!

Saturday 28 February 2009

Managing object ownership in C++ with auto_ptr

I really like garbage-collected languages; they make you feel safe and confident, freed from managing memory explicitly... but at a price: you lose expressiveness about object ownership and control over object lifecycles. When an instance of a class is created, no one but the garbage collector is responsible for taking care of that object.

We are fine as long as we are only dealing with memory, but lacking the ability to communicate responsibilities is a pain when other resources (e.g. files, locks, connections to external systems) come into play.

C++ has mechanisms to express the responsibilities a class has over other classes and to control when objects are destroyed. This is the first of a series of posts in which I want to write about such mechanisms; today it's the turn of std::auto_ptr.

Smart pointers are classes that mimic "classic" pointers by overloading operator* and operator->, but provide "extra features" (e.g. memory management, locking) to their pointees.

std::auto_ptr is a smart pointer included in the Standard Template Library; its main features are:
  • it owns its pointee, which means that when an auto_ptr is destroyed, so is its pointee.

  • it can transfer the ownership of its pointee to another auto_ptr.


This is a simplified version of auto_ptr:

namespace std {
    template <class X>
    class auto_ptr {
    public:
        explicit auto_ptr(X* p = 0) : pointee_(p) { }
        auto_ptr(auto_ptr& rhs) {
            pointee_ = rhs.pointee_;
            rhs.pointee_ = 0; // relinquish ownership
        }

        ~auto_ptr() { delete pointee_; }

        X& operator*() const { return *pointee_; }
        X* operator->() const { return pointee_; }
    private:
        X* pointee_;
    };
}

Notice how pointee_ is destroyed in the destructor of auto_ptr and how ownership of pointee_ is transferred in the copy constructor.

auto_ptr's power arises when it is used with value semantics, that is:
  • as a local variable (stack allocated): when the auto_ptr goes out of scope, it is destroyed and so its pointee is deleted, avoiding any explicit memory management on our side. This still holds true in the presence of exceptions:

    void doStuff1 () {
        MyClass *myClass (new MyClass);

        // some dangerous stuff here that likely
        // throws an exception, leaking *myClass

        delete myClass;
    }

    void doStuff2 () {
        auto_ptr<MyClass> myClass (new MyClass);

        // some dangerous stuff here that, even
        // upon exceptions, does not cause leaks

        // no explicit delete needed
    }

    This is a form of the RAII (Resource Acquisition Is Initialization) C++ idiom, which I plan to post about soon...


  • as a function parameter passed by value: when an invocation of that function occurs, the copy of the auto_ptr performed to build the parameter makes the caller relinquish ownership of the auto_ptr in favor of the callee. The auto_ptr's pointee will be destroyed when the callee returns, unless the callee also relinquishes ownership within its body. This idiom is called the "auto_ptr sink" (see the sketch after this list).


  • as the return value of a function: when a function returns an auto_ptr by value, it relinquishes ownership of it in favor of the caller. This idiom is called the "auto_ptr source".


  • as a class member: this is often used to tie the lifetime of the auto_ptr's pointee to that of the class instance the auto_ptr is a member of; when that object is destroyed, so is the auto_ptr, and so is the pointee.


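Here is a minimal sketch (MyClass and the function names are made up) combining the source and sink idioms:

#include <memory>

class MyClass {
public:
    void doStuff() { }
};

std::auto_ptr<MyClass> createMyClass() {        // source: ownership goes to the caller
    return std::auto_ptr<MyClass>(new MyClass);
}

void consumeMyClass(std::auto_ptr<MyClass> p) { // sink: ownership goes to the callee
    p->doStuff();
}                                               // the pointee is deleted here

int main() {
    std::auto_ptr<MyClass> obj = createMyClass();
    consumeMyClass(obj); // obj relinquishes ownership; it now holds 0
    return 0;
}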

These uses of auto_ptr enable the programmer to communicate how variables are meant to be used. Let's see some examples:

  • Object creation by factories: factories are meant to encapsulate object creation. They are often implemented this way:


    class MyFactory {
    public:
        MyObject* createMyObject();
    };

    Looking at that code, you cannot tell whether the caller is responsible for the destruction of the newly created object or not. A possible solution would be to drop a comment stating who is responsible, but instead let's use a smart pointer to replace the dumb one:

    class MyFactory {
    public:
        auto_ptr<MyObject> createMyObject();
    };

    This way, using the auto_ptr source idiom, the factory relinquishes ownership of the newly created object in favour of the caller, so just by looking at the code we are sure about who is responsible for the object.


  • Decorator pattern: in this design pattern, an object wraps another one to provide extra features; both of them implement the same interface, so the wrapped version is used seamlessly as if it were not wrapped. In the usual implementation of this pattern, the decorator class receives a pointer to the wrapped object in its constructor, stores it as a member variable, and destroys it in its own destructor. As in the previous example, let's use a smart pointer instead of a dumb one to better communicate responsibilities over the wrapped object's lifetime.


    #include <iostream>
    #include <memory>
    #include <string>

    class IStuff {
    public:
        virtual ~IStuff () { } // virtual destructor: instances are deleted through IStuff*
        virtual std::string getStuff () = 0;
    };

    class Foo : public IStuff {
    public:
        Foo () { }
        std::string getStuff () {
            return "foo";
        }
    };

    class Decorator : public IStuff {
    public:
        Decorator (std::auto_ptr<IStuff> decorated)
            : decorated_ (decorated) { }
        std::string getStuff () {
            return "decorated " + decorated_->getStuff();
        }
    private:
        std::auto_ptr<IStuff> decorated_;
    };

    int main (void) {
        std::auto_ptr<IStuff> foo (new Foo);
        std::auto_ptr<IStuff> decoratedFoo (new Decorator(foo));

        std::cout << decoratedFoo->getStuff() << std::endl;

        return 0;
    }

    Note how the decorated object is passed inside an auto_ptr to Decorator's constructor; thus we express that Decorator is the one responsible for managing it. Also note how the auto_ptr is stored in a member variable, thus tying the lifetime of the decorated object to that of the decorator.


Some final thoughts...

  • A variation in how auto_ptr behaves arises when it is declared const: a const auto_ptr cannot transfer ownership of its pointee. In this case, the pointee is tied to that very auto_ptr, no matter what happens. This is a somewhat degenerate use of auto_ptr; for this purpose, Boost's scoped_ptr is more appropriate.

  • Beware of auto_ptr when dealing with STL containers: they are not auto_ptr-friendly, because they make internal copies of their data, sometimes keeping more than one copy alive at the same time, which goes against auto_ptr's allowed usage. So just avoid lists (or vectors, or any other STL container) of auto_ptr.

  • I intentionally avoided mentioning a member function of auto_ptr called "reset", which enables reusing the auto_ptr for another pointee. I tend not to use it because I think it's quite misleading, as auto_ptr is often used to communicate durable ownership semantics.



Preparing for the future: unique_ptr
Since its inclusion in the C++ standard, auto_ptr has been the subject of much controversy due to both its semantics and its implementation. In the new C++ standard, known as C++0x, auto_ptr is DEPRECATED. It is to be replaced by class unique_ptr, taken from Boost, which basically works like auto_ptr but without its deficiencies. To do so, it makes use of rvalue references, one of the goodies C++0x will bring, which make it possible to express "move semantics".
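
As a taste of what is coming (an illustrative sketch only; it assumes a C++0x compiler and library), ownership transfers with unique_ptr must be spelled out with std::move:

#include <memory>
#include <utility>

struct MyClass { };

int main() {
    std::unique_ptr<MyClass> p(new MyClass);
    std::unique_ptr<MyClass> q = std::move(p); // explicit ownership transfer
    // std::unique_ptr<MyClass> r = q;         // does not compile: no silent copies
    return 0;
}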

Saturday 14 February 2009

Koenig lookup...wha?

This is the first piece of C++ code many programmers wrote:


std::operator<<(std::cout, "hello world!").operator<<(std::endl);



Don't believe me? Let's reorganize it a little bit:


std::cout << "Hello world!" << std::endl;


More familiar now, isn't it?

The dark feature that makes the latter equivalent to the former is a feature of the C++ language called "Koenig lookup", a.k.a. "argument-dependent name lookup".

Let's unveil why, without it, you could not write that piece of code the easy way...

This is the signature for operator<<:


// Extracted from libstdc++-v3, with minor modifications
namespace std {
    template<typename _CharT, typename _Traits>
    basic_ostream<_CharT, _Traits>&
    operator<<(basic_ostream<_CharT, _Traits>& __out, const char* __s);
}


Better without the template clutter...


namespace std {
    ostream& operator<<(ostream&, const char*);
}


You can see it is defined in namespace std, like the rest of the STL.

When something is defined inside a namespace, we have to either use a "using" directive or prefix it with that namespace and "::", as with cout and endl in this piece of code...


std::cout << "Hello" << std::endl;


But then why don't we need any prefix for operator<< ??? Because of Koenig lookup, which states that unqualified (i.e. not namespace-prefixed) calls to functions are also looked up in the namespaces associated with the types of the invocation arguments.

This makes it possible for operators to be called with the classic infix syntax, and avoids annoying namespace prefixes all over the place without the need for a "using namespace".
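
Here is a minimal sketch (the namespace and names are made up) of Koenig lookup applied to an ordinary function:

#include <iostream>

namespace mylib {
    struct Widget { };
    void report(const Widget&) { std::cout << "widget" << std::endl; }
}

int main() {
    mylib::Widget w;
    report(w); // no mylib:: prefix needed: the namespace of w's type is searched too
    return 0;
}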

Koenig lookup relies on the interface principle, which states that nonmember functions that mention a class and are supplied along with that very class (e.g. declared in the same header file) are also part -logically speaking- of its interface.

But it can also have the opposite effect, forcing us to prefix a function call even when we have already placed a "using" directive or are inside the very namespace the function is declared in, due to the ambiguity it introduces (this is often referred to as Myers' example):


#include <iostream>
using namespace std;

namespace A {
    class AClass { };
    ostream& operator<<(ostream&, const AClass&);
}

namespace B {
    ostream& operator<<(ostream&, const A::AClass&);
    void doStuff(A::AClass& a) {
        cout << a; // ambiguous: operator<< from A (via Koenig lookup) or from B?
    }
}


One last curious thing... do you know the type of std::endl? It is actually a function template that takes an ostream as argument; it just appends an end-of-line character to its input ostream and then flushes it:


// From libstdc++-v3.
template <typename _CharT, typename _Traits>
inline basic_ostream<_CharT, _Traits>&
endl(basic_ostream<_CharT, _Traits>& __os)
{
    return flush(__os.put(__os.widen('\n')));
}


It is possible to append a function template to a chain of invocations of << thanks to class basic_ostream's member operator<<, defined as follows:


// From libstdc++-v3.
__ostream_type&
operator<<(__ostream_type& (*__pf)(__ostream_type&))
{
    return __pf(*this);
}


To test this, just use std::endl as the function it actually is:


std::endl( std::operator<<( std::cout, "hello world!" ) );


NOTE: for the historians out there, recall that Koenig lookup was not there from the beginning; it replaced "friend name injection", which stated that when a class template is instantiated, the names of friend classes and functions are "injected" into the enclosing namespace. g++ still supports this behaviour via the -ffriend-injection flag.

NOTE: cited pieces of the standard template library have been taken from GNU Standard C++ Library v3.

Monday 26 January 2009

operator->() ad infinitum

Short version:

In C++, if an implementation of operator->() returns something different than a raw pointer, operator->() will be invoked on that return value; this will happen again and again until at some point a raw pointer is returned.


Lil' longer version:

One of the operators C++ allows you to overload is "->". This enables syntactic constructions that mimic pointer usage, like smart pointers, which look like this...


template <typename T>
class MySmartPointer {
public:
    MySmartPointer(T* pointee) : pointee_(pointee) {}

    T* operator->() { return pointee_; }

    /* some magic here... */

private:
    T* pointee_;
};


Thus, you can use MySmartPointer with the same syntax as a plain T* :


MyClass* x (new MyClass);
x->doStuff();

MySmartPointer <MyClass> y (new MyClass);
y->doStuff();


However, if we declare operator->() to return something that is not a raw pointer, the compiler will generate code to invoke operator-> on that object, and so on:


#include <iostream>
using namespace std;

struct Baz {
    void sayHello() { cout << "hello" << endl; }
};
struct Bar {
    Baz* baz_;
    Baz* operator->() { return baz_; }
};
struct Foo {
    Bar bar_;
    Bar operator->() { return bar_; }
};

void doStuff () {
    Foo foo;
    foo->sayHello();
}



'doStuff' is equivalent to this:

void doStuff () {
    Foo foo;
    Bar bar (foo.operator->());
    Baz* baz = bar.operator->();
    baz->sayHello();
}


Bjarne Stroustrup used this language feature -plus a stack-allocated auxiliary object- in a proposal for a C++ idiom for wrapping calls to member functions, making it possible to run code just before and just after any method call (i.e. to intercept it). Let's see it in an example:

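The code below relies on a Lock class that is not defined in this post; a hypothetical minimal version over a POSIX mutex could look like this:

#include <pthread.h>

// Hypothetical minimal Lock over a pthread mutex, just so the
// templates below have something to compile against.
class Lock {
public:
    Lock() { pthread_mutex_init(&mutex_, 0); }
    ~Lock() { pthread_mutex_destroy(&mutex_); }
    void lock() { pthread_mutex_lock(&mutex_); }
    void unlock() { pthread_mutex_unlock(&mutex_); }
private:
    pthread_mutex_t mutex_;
};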

template<typename T>
class LockSmartPtrAux {
public:
    LockSmartPtrAux(T* t, Lock& lock)
        : t_(t), lock_(lock) {
        lock_.lock(); /* lock at creation */
    }
    ~LockSmartPtrAux() {
        lock_.unlock(); /* unlock at destruction */
    }
    T* operator->() { return t_; }
private:
    T* t_;
    Lock& lock_;
};

template<typename T>
class LockSmartPtr {
public:
    LockSmartPtr(T* t) : t_(t) {}
    LockSmartPtrAux<T> operator->() {
        return LockSmartPtrAux<T>(t_, lock_);
    }
private:
    T* t_;
    Lock lock_;
};
// LockSmartPtrAux should be declared inside LockSmartPtr to enhance symbol locality


With the class templates above, you can "wrap" any class so that every invocation of its member functions is guarded by a lock, thus achieving some degree of transparent thread safety:


map<string, MyClass> cache;
/* fill cache with useful and hard to build stuff... */
LockSmartPtr<map<string,MyClass> > threadsafeCache(&cache);
doSomethingInAnotherThread1 (threadsafeCache);
doSomethingInAnotherThread2 (threadsafeCache);


Here follow some more details about the implementation...

  • To perform some action (e.g. lock_.lock()) only BEFORE the "intercepted" call, you don't need an extra auxiliary object (e.g. LockSmartPtrAux); you only need to perform it before returning the pointee.

  • However, to perform some action (e.g. lock_.unlock()) just AFTER the intercepted call and before the return, you have to place that action in the destructor of an auxiliary object and use the operator-> trick.



This technique can be used to implement a rudimentary form of Aspect Oriented Programming in C++ by expressing cross-cutting concerns in these "wrapper" classes.

That's all the C++ trickery for today, hope you liked it.

NOTE: my implementation of LockSmartPtr assumes your compiler implements the named return value optimization (NRVO); if this does not hold true and you use non-recursive locks, you will probably get a nasty deadlock due to temporary copies of LockSmartPtrAux locking an already locked mutex from the same thread. Check out Stroustrup's paper to see how he tackles this and other issues (by managing lock ownership and forbidding assignment).

Sunday 4 January 2009

Embedding values in C++ pointers

Short version:

Read, compile and run the following piece of C++ source code:

#include <iostream>

class Smi {
public:
    static Smi* fromInt(int value) {
        return reinterpret_cast<Smi*>(value);
    }
    int value() {
        return reinterpret_cast<int>(this);
    }
    void sayHello() {
        std::cout << "Hello, my value is "
                  << this->value() << "."
                  << std::endl;
    }
};

int main(void) {
    Smi* five = Smi::fromInt(5);
    Smi* seven = Smi::fromInt(7);
    five->sayHello();
    seven->sayHello();
    return 0;
}


Notice that the only state class Smi has is the integer value "embedded" in the "this" pointer.

Lil' longer version:

C++ allows you to do pure magic. From time to time I see a piece of C++ code that makes me think: "Does this even compile?". A few days ago I discovered one of those "gems" in the source code of V8 (Google Chrome's JavaScript engine).

Let's begin with a quiz: what do you think the following code could be for?

reinterpret_cast<int>(this)


Now with some more context...

int Smi::value() {
    return reinterpret_cast<int>(this) >> kSmiTagSize;
}


Ummmm..... :|
Well, let's unveil the mystery...

V8 models every representable entity available in JavaScript (ECMAScript) with classes deriving from class Object, as the comments in file objects.h suggest:

//
// All object types in the V8 JavaScript are described in this file.
//
// Inheritance hierarchy:
// - Object
//   - Smi          (immediate small integer)
//   - Failure      (immediate for marking failed operation)
//   - HeapObject   (superclass for everything allocated in the heap)
//     - JSObject
//       - JSArray
//       - JSRegExp
//       - JSFunction
// ...


Every instance of these entities is allocated and managed by class Heap, V8's runtime memory manager. When Heap is asked to allocate an Object, it returns an Object*, but such a pointer carries a hidden surprise, as the comments in objects.h depict:


// Formats of Object*:
// Smi: [31 bit signed int] 0
// HeapObject: [32 bit direct pointer] (4 byte aligned) | 01
// Failure: [30 bit signed int] 11


Such comments state three things (apart from the obvious one: what Heap returns as pointers to Object are no such thing...):
  • the least significant bits of the "pointer" carry a "tag" to indicate the kind of Object.

  • In the case of Smi* and Failure*, the remaining bits are not used to store any kind of pointer, but a numeric value (31 and 30 bits long, respectively). This is the way an Smi* is created...

    Smi* Smi::FromInt(int value) {
        ASSERT(Smi::IsValid(value));
        return reinterpret_cast<Smi*>((value << kSmiTagSize) | kSmiTag);
    }

    ...and this is how to retrieve the value...

    int Smi::value() {
        return reinterpret_cast<int>(this) >> kSmiTagSize;
    }

    Thus they avoid the "overhead" of storing the pointer and the pointee, as both are the same.

  • When the "pointer" "points" to a HeapObject instance, the 30 most significant bits carry an actual pointer to a HeapObject that is aligned to 4 bytes, thus the two other bits are always zero, space which is used for the tag. To illustrate this, the following piece of code is the one that, from a true object address, makes up the tagged pointer:

    HeapObject* HeapObject::FromAddress(Address address) {
        ASSERT_TAG_ALIGNED(address);
        return reinterpret_cast<HeapObject*>(address + kHeapObjectTag);
    }


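Putting the two encodings together, here is a worked sketch of the arithmetic, assuming kSmiTagSize == 1 and kSmiTag == 0 (the values the "[31 bit signed int] 0" format above implies):

#include <cassert>

const int kSmiTagSize = 1;
const int kSmiTag = 0;

int main() {
    int tagged = (5 << kSmiTagSize) | kSmiTag; // 5 encoded as a tagged "pointer": 10
    assert((tagged >> kSmiTagSize) == 5);      // shifting the tag away recovers 5
    assert((tagged & 1) == kSmiTag);           // the low bit marks this as an Smi
    return 0;
}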

The trick works as long as you don't try to dereference one of those Object*...

Like some stuff in V8's native code generators I blogged about some time ago, this "tagged pointer" trick is not new; it can also be found in StrongTalk and SelfVM (Smalltalk and Self virtual machines, respectively, that share creators with V8 :p)

Hope you enjoyed this curious trick!

My bugs love defensive programming

Short version:
Some forms of defensive programming hide bugs, YOUR bugs.
Other forms enforce a design-by-contract style, enhancing both the quality and the maintainability of the software.
Choose wisely...

Lil' longer version:
Of course you know what "defensive programming" is, but perhaps you haven't heard it called that until now; let's see it in code:


public class NetworkFoo {
    /**
     * Performs connection.
     * @param host Machine where server is running. Must not be null.
     * @param port Port server is listening to. Must be in the range 1-65535.
     */
    public void connect (String host, int port) {
        if (host == null) {
            return;
        }
        if (port < 1 || port > 65535) {
            return;
        }
        ...
    }
    ...
}


Sure you have seen -or written- code like that; it is just one form of defensive programming. The techniques grouped under this term aim at making software robust against unexpected situations.

That sounds quite reasonable; however, many times the programmer does not fully understand the implications of how she chose to implement the checks she wanted performed. To understand this, let's look at a variant of the code above:


public class NetworkFoo {
    /**
     * Performs connection.
     * @param host Machine where server is running. Must not be null.
     * @param port Port server is listening to. Must be in the range 1-65535.
     */
    public void connect (String host, int port) {
        assert host != null;
        assert (port > 0 && port <= 65535);
        ...
    }
    ...
}


Here we have replaced the 'if' blocks with assertions, so now, instead of silently returning upon malformed parameters, execution simply stops when one of the preconditions is not met.

What is the difference in method 'connect' between these two pieces of code? Its preconditions and postconditions are NOT the same in both cases:
  • In the latter, the preconditions are: 'host' shall not be null and 'port' shall be between 1 and 65535.
  • In the former, although specified in the method header, there are no preconditions regarding the parameters; instead there is an extra implicit postcondition which states that when 'host' is null or 'port' is not between 1 and 65535, the method simply does nothing.

Thus, these two ways of implementing such "guards" have a different impact on our code, and so we should be careful when choosing which approach to use in each case.

As stated at the beginning, defensive programming techniques try to shield code against unexpected [dangerous] situations that might break our logic; the most frequent checks are performed on method parameters or on return values after invocations. When choosing how to protect the code, we have to take into account the origin of the "potentially malicious" data:
  • if such data comes directly from user input (or any other untrusted external interface), our logic should certainly take malformed data into account,
  • but if we perform the same checks on data coming from another method within our own code, we are hiding the fact that we were passed improper parameters by our very selves, and thus hiding our own bugs!

And what if we directly remove our "silently return" 'if' blocks and just rely on the caller meeting the preconditions we specified in the method header? If the method is called from our own code, there should be no problem, as long as we always write bug-free code, which is surely not the case. The behaviour of our method would then be undefined for malformed input data, meaning a hypothetical invocation with improper parameters might or might not lead to a noticeable effect.

The following table summarizes the consequences of applying or not defensive programming to our methods:


                                  Source of the "potentially dangerous" data
                                  External source                        Our own code
No defensive programming at all   Fragile code against malformed input   Undefined behaviour upon caller bugs
Silently return                   Robust code against malformed input    Hide our own bugs
Assertions                        Our code breaks upon malformed input   Design-by-contract style


[Ask google about design-by-contract to know more about it.]

A common practice is to disable assertions when a system goes into production; my opinion is that if it is tolerable for the system to crash (sporadically), they should remain enabled, to detect bugs that escaped the development and validation phases.

In the development of mission-critical systems, it is not uncommon to enforce the 'silently do nothing' flavour of defensive programming in order to avoid [unexpected] crashes. I have no formed opinion on this topic... what is the most acceptable choice in these cases? I suppose a system crash is unacceptable, but is enabling silent bugs acceptable?

Summing up, defensive programming is a useful tool in the pursuit of software quality and maintainability, but we should be wise in choosing how we "protect" our code, so that we achieve robustness without hiding our own bugs.

Monday 1 December 2008

Implementation inheritance is like playing Russian roulette

Short version:

Implementation inheritance breaks encapsulation and leads to the Fragile Base Class (FBC) problem.

Lil' longer version:

Here is the list of buzzwords I'm going to use throughout this post: implementation inheritance, method precondition, method postcondition, class invariant, encapsulation, Liskov substitution principle. Let us give some "loose definitions" for them:

  • implementation inheritance is when you use class inheritance as a mechanism for reusing behaviour. This concept is opposed to "interface inheritance", by which a class just inherits "the contract" it has to comply with.
  • method preconditions are the set of conditions that the caller of a method must ensure before actually calling it (e.g. argument 'x' is greater than 0, argument 'parent' is not null, etc.).
  • method postconditions describe the result of a call to a method (either its return value or its effect on the state of the object it belongs to).
  • class invariant is the set of constraints that express what it means for an object of that class to be in a consistent state (e.g. the 'width' attribute of an instance of class 'Window' must not be less than zero).
  • encapsulation (a.k.a. information hiding) is the principle by which a class "hides" anything (state/behaviour) that other classes do not really need to know, thus reducing their dependence on that class to a bare minimum. This way, you reduce the probability of having to change the former upon modifications to the latter.
  • the Liskov substitution principle (LSP) (closely related to design by contract) states that it should be possible to treat an instance of a subclass as if it were a base class object, meaning a subclass must comply with everything that could be expected from its parent.
Now let's ask some questions about these terms so we can extract some ideas...

First question: How can the developer "communicate" the preconditions and postconditions of a method?

You can choose among several options to let the reader know the pre/postconditions of a method:
  • Using language facilities: type constraints (e.g. if a parameter must be between 0 and 65535, you can make it an unsigned short in C++), assertions, etc.
  • Using comments: method headers usually express in natural language what cannot be expressed using programming language syntax (e.g. /* this class is not thread safe */ ).
  • Not communicating them: either because you don't feel it necessary or because you simply do not know about them (in this case the pre/postconditions still exist; they are just implicit in the method body).

Second question: What does LSP actually mean when it says a subclass must comply with everything that could be expected from its parent?

To call a method of an object, you first prepare its arguments as the method expects them (preconditions), then perform the call, and afterwards you expect some sort of result from the invocation (postconditions), either a returned value or a change in the object's state. If such a call were issued on a subclass instance, the mentioned preconditions and postconditions should still be perfectly valid.


Third question: So what is the relationship between the pre/postconditions and invariant of a class and its subclasses?
  • Preconditions of a subclass should not be stronger than its parent's (see the sketch after this list).
  • Postconditions of a subclass should not be weaker than its parent's.
  • Invariant of a subclass should not be weaker than its parent's.
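
As an illustration of the first rule (a hedged sketch with made-up classes, not production code), a subclass that strengthens a precondition stops being substitutable:

#include <cassert>

class Stack {
public:
    virtual ~Stack() { }
    virtual void push(int x) { /* accepts any int */ }
};

class PositiveStack : public Stack {
public:
    void push(int x) {
        assert(x > 0);  // stronger precondition than the parent's
        Stack::push(x);
    }
};

// Code written against Stack may legitimately call push(-1);
// handing it a PositiveStack breaks that code, so PositiveStack
// violates LSP even though it compiles fine.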

Fourth question: But that means a subclass is also responsible for maintaining its parent's pre/postconditions and invariant, doesn't it?

Yeah, right. That's precisely the point, and it is the very reason why implementation inheritance is said to break encapsulation: the subclass has to know details of the implementation of its base class.


Fifth question: I guess such responsibility can be troublesome upon code changes, am I right?

You are quite right. Every change to the superclass's pre/postconditions or invariant, whether explicit (due to changes in method signatures) or implicit (due to changes in the code), forces verification of each subclass's pre/postconditions and invariant. To perform that verification properly, the coder needs to fully understand the intent of the code of both the base class and the derived one, which is not likely to happen (even if it is the same person who wrote both classes); this is why modifications to the base class often break subclasses. This is known as the "Fragile Base Class" problem (see the sketch below).

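Here is a minimal sketch (made-up classes, in the spirit of a classic Java example) of how an implicit detail of the base class bites a subclass:

#include <cstddef>
#include <vector>

class Collection {
public:
    virtual ~Collection() { }
    virtual void add(int x) { items_.push_back(x); }
    virtual void addAll(const std::vector<int>& xs) {
        for (std::size_t i = 0; i < xs.size(); ++i)
            add(xs[i]); // implementation detail: addAll() delegates to add()
    }
protected:
    std::vector<int> items_;
};

class CountingCollection : public Collection {
public:
    CountingCollection() : count_(0) { }
    void add(int x) { ++count_; Collection::add(x); }
    void addAll(const std::vector<int>& xs) {
        count_ += xs.size();    // relies on the parent NOT delegating to add()...
        Collection::addAll(xs); // ...but it does, so every element is counted twice
    }
private:
    std::size_t count_;
};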

Sixth question: How does this stuff apply to interface inheritance?

With interface inheritance you also have to comply with the pre/postconditions of the interface you implement; however, in this case there are no conditions implicit in code (as there is no code at all): everything you have to comply with is either expressed with language constructs or as comments in method headers. Upon a change in method signatures, implementors also have to change (otherwise they won't compile).


Seventh question: Your points seem to rely on the correctness of LSP; is it some kind of dogma or what?

In Uncle Bob's words: "It is only when derived types are completely substitutable for their base types that functions which use those base types can be reused with impunity, and the derived types can be changed with impunity". Sure you can violate LSP, but not if you want to achieve the code reuse promise from object oriented programming.


Summing up....

Implementation inheritance breaks encapsulation because subclasses have to know every (explicit or implicit) precondition and postcondition of their parent -plus its invariant- to comply with LSP. This knowledge is likely to be flawed due to unknown (but existing) pre/postconditions, and the mentioned elements can change during the lifecycle of the project (e.g. in the maintenance phase), leading to the Fragile Base Class problem, by which modifications to the base class break subclasses.