I often use tbb's containers (tbb::concurrent_vector, tbb::concurrent_unordered_map, etc.).
I know their member functions (push_back, emplace, etc.) are thread-safe, so they can be called concurrently without causing a data race.
How about accessing their elements,
particularly for write and read-modify-write operations?
tbb::concurrent_vector<int> v(10);
// Please assume the below scope is done in parallel tasks
{
v[0]=5; // OK?
v[0]+=5; // OK?
// CAUTION: what I mean is whether (1) write and (2) read-modify-write operations are thread-safe or not
}
If the above is not thread-safe,
do concurrent containers only give parallel benefits for (1) building the container and (2) read-only access?
(Of course, when the element type is atomic, it works well.)
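For example, a minimal sketch of that last point (assuming std::atomic<int> elements and oneTBB's parallel_for; the sized constructor default-constructs the elements in place):

#include <atomic>
#include <tbb/concurrent_vector.h>
#include <tbb/parallel_for.h>

int main() {
    tbb::concurrent_vector<std::atomic<int>> v(10);
    for (auto& x : v) x.store(0);      // initialize sequentially before the parallel phase

    tbb::parallel_for(0, 1000, [&](int) {
        v[0].fetch_add(5);             // atomic read-modify-write: no data race
    });
    return 0;
}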
But you have the source code in the headers, so perhaps look at the implementation of operator[] and see if there is any kind of locking.
tbb by Intel is all about parallel programming and, according to them, is designed to overcome the problems associated with raw thread concurrency. So it would seem obvious that a properly designed and implemented parallel programming exercise would be 'automatically' thread safe.
However, just using a tbb vector doesn't guarantee thread safety or more to the point that you even have a parallel-type program.
I understand that element read/write operations are not guaranteed to be thread-safe.
So, I have tried some locking (spin_mutex, queuing_mutex, etc.),
but as expected, the synchronization overhead is non-negligible.
For efficient, hardware-level thread-safe read/write operations,
atomics would be desirable.
But I noticed that for general class types, the atomic template
does not provide "fetch_and_add" as standard.
I really need efficient read/write operations on elements of stl/tbb containers.
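One possible middle ground is per-element locking, sketched below (the Accum struct is hypothetical; each element carries its own tbb::spin_mutex, so a write locks only that element rather than the whole container):

#include <tbb/concurrent_vector.h>
#include <tbb/spin_mutex.h>
#include <tbb/parallel_for.h>

struct Accum {                       // hypothetical element type
    double sum = 0.0;                // non-atomic payload
    tbb::spin_mutex mtx;             // guards only this element
};

int main() {
    tbb::concurrent_vector<Accum> v(10);

    tbb::parallel_for(0, 1000, [&](int i) {
        Accum& a = v[i % 10];
        tbb::spin_mutex::scoped_lock lock(a.mtx);   // per-element lock, not a global one
        a.sum += 1.0;                               // read-modify-write under the lock
    });
    return 0;
}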
> againtry
Yes, tbb is made by Intel (its predecessor was developed at MIT), and so it is well designed to overcome general problems relating to shared-memory parallelism.
There are a lot of ways to avoid data races (thread-local storage, various kinds of locking, atomics, reduction algorithms, etc.).
However, concurrent containers are often exposed to compound read/write operations, not just on a single element but check-then-act sequences over the container (e.g. find whether an element is in the container; if not, insert it; if it is, add to it; etc.).
Such a fundamental operation is not supported in a standard manner,
and I am looking for an efficient way to do it.
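One candidate is tbb::concurrent_hash_map, whose accessor holds a per-element lock, so the "insert if absent, otherwise update" sequence can be done safely. A minimal sketch (the key/value types are just illustrative):

#include <tbb/concurrent_hash_map.h>
#include <tbb/parallel_for.h>
#include <string>

using CounterMap = tbb::concurrent_hash_map<std::string, int>;

int main() {
    CounterMap counts;

    tbb::parallel_for(0, 1000, [&](int i) {
        CounterMap::accessor a;
        counts.insert(a, "key" + std::to_string(i % 3));  // inserts {key, 0} if absent; locks that element
        a->second += 1;                                   // safe update while the accessor holds the lock
    });
    return 0;
}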
It depends on what you define as thread-safe. The container can only protect its internal data.
When you run the two element accesses (v[0]=5; and v[0]+=5;) in two different threads, you cannot predict the result. Each assignment itself might be thread-safe, but the consecutive operations are not protected.
After all, read/write operations (or fetch_and_add) on individual elements are not provided in a thread-safe (or task-safe) manner.
Thank you so much. I am going to consider other ways to maximize efficiency, such as arranging the data flows & structures so that there is no data race.
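For the record, one such no-data-race arrangement is to partition the index range so that each task writes only to its own disjoint slice; then no locking or atomics are needed. A minimal sketch (using a plain std::vector, since nothing is shared between tasks):

#include <vector>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

int main() {
    std::vector<double> v(1000, 0.0);

    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, v.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                v[i] += 5.0;   // each index is written by exactly one task: no data race
        });
    return 0;
}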