How to implement a thread-safe destructor for a reference-counted smart pointer

Hi friends,

For my project, I am implementing a class which is reference counted.
- An instance of this class is shared by many threads.
- For this reason, incrementing and decrementing the reference count have to be atomic.
- Incrementing is not a problem: I lock the mutex, increment, and unlock the mutex (see the sketch just after this list).
- The problem comes in the destructor, when the reference count drops to 0 (or, more precisely, is about to drop to 0).
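
For reference, the increment side looks more or less like this (a minimal sketch only: AddRef is just an illustrative name, and the csock_t / _pImpl / _countMutex / _refCount members are the same ones used in the destructor below):

// Sketch of the (unproblematic) increment path: lock, bump, unlock.
void CSocket::AddRef() {
    csock_t* ps = reinterpret_cast<csock_t*>(_pImpl);
    pthread_mutex_lock(&(ps->_countMutex));
    ++ps->_refCount;
    pthread_mutex_unlock(&(ps->_countMutex));
}

The destructor, where the problem shows up, is this: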

233 CSocket::~CSocket() {
234     Release();
235     csock_t* ps = reinterpret_cast<csock_t*> (_pImpl);
236     assert(ps);
237     
238     pthread_mutex_lock(&(ps->_countMutex));
239     dbgprintf("[Csocket] Destructor Reference count :%d\n",ps->_refCount);
240     if (!ps->_refCount) {
241         pthread_mutex_destroy(&(ps->_countMutex));
242         if (ps->_bSocketCreated) close(ps->_iSockFD);
243         if (ps->_isServiceRunning)
244             pthread_cancel(ps->_thread_id);
245         delete ps;
246     }
247     pthread_mutex_unlock(&(ps->_countMutex));
248 }


In the destructor pasted above:
- If the reference count drops to 0, I have to release all resources allocated for this object.
- Since the object is referred to by multiple threads, I have to synchronize the destruction, so that it is done once and only once.
- My problem is that the mutex I use for synchronizing is itself part of the object's data, and it is destroyed and freed along with the object (ref: line #245). After this the mutex is invalid, and the unlock on line #247 operates on a destroyed mutex and can block forever.
- In this case, how do I lock and unlock the mutex?

So, my question is: how do I implement proper thread-safe destruction?

Thanks in advance for your kind replies.
regards,
RV
Something like this?

// Do the bookkeeping under the lock, but defer the actual delete
// until after the mutex has been released.
Mutex* mutex = your_ptr->mutex;

LockMutex( mutex );

DecRef();

obj* todelete = 0;
if( NeedToDelete() )
{
  todelete = your_ptr;   // remember the object to destroy...
  your_ptr = NULL;       // ...and make sure it is not used again
}

UnlockMutex( mutex );

delete todelete;         // deleting a null pointer is a harmless no-op
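
Applied to the CSocket code above, that could look roughly like this. This is only a sketch: I have folded the decrement straight into the destructor instead of calling Release() (since I don't know what Release() returns), and the csock_t / _pImpl member names are the ones from your post:

CSocket::~CSocket() {
    csock_t* ps = reinterpret_cast<csock_t*>(_pImpl);
    assert(ps);

    // Decrement and test under a single lock acquisition, so exactly one
    // thread can observe the count reaching zero.
    pthread_mutex_lock(&(ps->_countMutex));
    bool lastRef = (--ps->_refCount == 0);
    pthread_mutex_unlock(&(ps->_countMutex));

    if (lastRef) {
        // No other thread holds a reference any more, so nothing else can
        // be touching ps or its mutex; teardown needs no further locking.
        if (ps->_isServiceRunning)
            pthread_cancel(ps->_thread_id);
        if (ps->_bSocketCreated)
            close(ps->_iSockFD);
        pthread_mutex_destroy(&(ps->_countMutex));
        delete ps;
    }
}

The important points are that the decrement and the zero test happen under one and the same lock acquisition, that the mutex is unlocked (and only afterwards destroyed) before the delete, and that once the count has reached zero no other thread should still hold a reference, so nothing else can be waiting on _countMutex.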