Linux read/write locks
This is the mail archive of the libc-alpha@sourceware.cygnus.com mailing list for the glibc project.
Re: Another LinuxThreads bug.
- To: Xavier Leroy <Xavier dot Leroy at inria dot fr>
- Subject: Re: Another LinuxThreads bug.
- From: Kaz Kylheku <kaz at ashi dot footprints dot net>
- Date: Mon, 10 Jan 2000 11:06:22 -0800 (PST)
- cc: libc-alpha at sourceware dot cygnus dot com
On Mon, 10 Jan 2000, Xavier Leroy wrote:

> Date: Mon, 10 Jan 2000 13:27:20 +0100
> From: Xavier Leroy <Xavier.Leroy@inria.fr>
> To: Kaz Kylheku <kaz@ashi.footprints.net>
> Cc: Ulrich Drepper <drepper@cygnus.com>, Andreas Jaeger <aj@suse.de>
> Subject: Re: Another LinuxThreads bug.
>
> > The problem is that the condition ``there is no write lock'' is
> > taken as sufficient for placing another read lock. To correctly give
> > precedence to writers, the condition should be ``there is no write
> > lock, and there are no waiting writers''.
> >
> > The count of existing read locks need not be consulted at all, even
> > if the lock prefers readers. A lock that prefers readers simply
> > allows a read lock to be placed whenever there is no write lock, and
> > ignores any waiting writers. This is reflected in the new logic,
> > which tests the lock's type and breaks the loop.
>
> Right. The patch looks good. I think we can put it in 2.1.3.

Unfortunately, upon reading the Single Unix Specification's description of
pthread_rwlock_rdlock, I have uncovered a problem.

*** The patch should *not* be applied, because it gives rise to another bug. ***

Here is the relevant wording:

    A thread may hold multiple concurrent read locks on rwlock (that is,
    successfully call the pthread_rwlock_rdlock() function n times). If so,
    the thread must perform matching unlocks (that is, it must call the
    pthread_rwlock_unlock() function n times).

By making write priority work correctly, I broke the above requirement,
because I had no clue that recursive read locks are permissible.

If a thread which holds a read lock tries to acquire another read lock, and
one or more writers are now waiting for a write lock, the algorithm leads to
an obvious deadlock. The reader is suspended, waiting for the writers to
acquire and release the lock, and the writers are suspended waiting for every
existing read lock to be released.

Correctly implementing write-priority locks in LinuxThreads in the face of
the above requirement doesn't seem easy. The read lock function must
distinguish whether the calling thread already owns a read lock. Since many
threads can own a read lock, and one thread can own read locks on many
objects, it appears that a structure of size O(M * N) is needed to track the
ownership.

The alternatives are:

1. Abandon support for writer priority. Make this decision visible to the
   programmer, so that they aren't led into false expectations.

2. Get it working somehow. This requires a much more sophisticated change
   than my naive patch, which does more harm than good.

I suspect that it may be necessary to go with option 1 for release 2.1.3.
However, read-write locks that avoid writer starvation are nice to have, so
research on option 2 should begin immediately. Perhaps an elegant solution
for this already exists that can be adapted for LinuxThreads.

A compromise might be to make read-priority locking the default, and guide
users to the non-portable attribute for writer priority if they want it, on
the condition that they can't use recursive read locks (which are
brain-damaged anyway!)
In that case, the patch *can* be applied, with the additional change that the
text in pthread.h which reads:

    #ifdef __USE_UNIX98
    enum
    {
      PTHREAD_RWLOCK_PREFER_READER_NP,
      PTHREAD_RWLOCK_PREFER_WRITER_NP,
      PTHREAD_RWLOCK_DEFAULT_NP = PTHREAD_RWLOCK_PREFER_WRITER_NP
    };
    #endif  /* Unix98 */

is changed to

    PTHREAD_RWLOCK_DEFAULT_NP = PTHREAD_RWLOCK_PREFER_READER_NP

According to The Single Unix Specification, the behavior is unspecified when
a reader tries to place a lock, there is no write lock, but writers are
waiting. That is, there is no requirement that priority be given to writers.
(But the unspecified behavior does not extend to permitting deadlock, I
presume!!!)

URL: http://www.opengroup.org/onlinepubs/007908799/xsh/pthread_rwlock_rdlock.html
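To make the failure mode concrete, here is a small test program (not from the original mail; the thread function, names, and timings are made up) that must succeed under the reader-preference rules, but would deadlock under strict writer preference: the second rdlock in main() would wait behind the queued writer, while the writer waits for the first rdlock to be released.

-------------------------------------------------------------
/* Sketch of the deadlock scenario described above; invented names. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t lock;

static void *writer_thread(void *arg)
{
    (void)arg;
    sleep(1);                        /* let main take its first read lock */
    pthread_rwlock_wrlock(&lock);    /* queued behind the existing read lock */
    printf("writer acquired the lock\n");
    pthread_rwlock_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t w;
    pthread_rwlock_init(&lock, NULL);
    pthread_rwlock_rdlock(&lock);    /* first read lock */
    pthread_create(&w, NULL, writer_thread, NULL);
    sleep(2);                        /* make sure the writer is waiting by now */
    pthread_rwlock_rdlock(&lock);    /* recursive read lock: must succeed per SUS */
    printf("recursive read lock granted\n");
    pthread_rwlock_unlock(&lock);
    pthread_rwlock_unlock(&lock);
    pthread_join(w, NULL);
    pthread_rwlock_destroy(&lock);
    return 0;
}
-------------------------------------------------------------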
=================================================================================================================
When creating a rwlock, ask for PTHREAD_RWLOCK_PREFER_WRITER_NP. The
resulting behavior is identical to PTHREAD_RWLOCK_PREFER_READER_NP, as can be
seen in the source. The following test case shows that as long as there are
readers holding the lock, a writer thread will be starved forever. However,
if the PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP option is used, the
writer thread gets to run. It is not allowed to be recursive, however.

-------------------------------------------------------------
#define _XOPEN_SOURCE 600
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <assert.h>
#include <time.h>
#include <error.h>
#include <string.h>

#define NUM_THREADS (250)

pthread_rwlock_t lock;

void *readfunc(void *arg)
{
    long long id = (long long)arg;
    while (1) {
        struct timespec ts = { .tv_sec = 0, .tv_nsec = (id % 25 + 1) * 1000 * 1000 };
        assert(0 == pthread_rwlock_rdlock(&lock));
        nanosleep(&ts, NULL);
        assert(0 == pthread_rwlock_unlock(&lock));
    }
}

void *writefunc(void *arg)
{
    sleep(1);
    assert(0 == pthread_rwlock_wrlock(&lock));
    // assert(0 == pthread_rwlock_wrlock(&lock)); // would fail if non-recursive
    printf("Writer got a chance!\n");
    // assert(0 == pthread_rwlock_unlock(&lock));
    assert(0 == pthread_rwlock_unlock(&lock));
    return 0;
}

int main(int argc, char *argv[])
{
    pthread_t writer, readers[NUM_THREADS];
    pthread_rwlockattr_t lockattr;

    assert(0 == pthread_rwlockattr_init(&lockattr));
    assert(0 == pthread_rwlockattr_setkind_np(&lockattr, PTHREAD_RWLOCK_PREFER_WRITER_NP));
    assert(0 == pthread_rwlock_init(&lock, &lockattr));
    assert(0 == pthread_rwlockattr_destroy(&lockattr));

    for (long long i = 0; i < NUM_THREADS; i++)
        assert(0 == pthread_create(readers + i, NULL, readfunc, (void *)i));
    assert(0 == pthread_create(&writer, NULL, writefunc, 0));

    printf("main waits\n");
    pthread_join(writer, NULL);
    return 0;
}
And there won't be any implementation.
Uli, I think this question deserves a more complete explanation. This seems
like a reasonable request. If you disagree, it would not hurt to explain why.
There seems to be a valid concern related to the reliable implementation of
recursive read locks with writer priority, as in this discussion:

http://sources.redhat.com/ml/libc-alpha/2000-01/msg00055.html

If this is your concern, then saying so would help resolve/close this issue.
Read/Write locks
Hello all,
Just a few questions on Read/Write locks:

1) Where can I find documentation, sample code,
2) Can I treat the rwlock stuff same as a mutex in terms of
   init/destroy/lock/unlock/trylock??? I had a look at pthread.h and all the
   calls look the same... (Is it basically a mutex that allows multiple locks
   for readers?)
3) What's the story with overhead if you start using r/w locks?
4) If you have many readers, could that mean that the writer will never get a
   chance to lock, or are the locks first-come-first-serve??? I'm thinking
   (I know it's probably dim but...) if a reader can always lock, there might
   be a case where there is always at least one reader on the mutex. What
   happens if a writer comes along and rwlocks???

Cheers,

"A good traveller has no fixed plans"
Lao Tzu (570-490 BC)
On Tue, 4 Apr 2000 20:46:41 +0100, Kostas Kostiadis <kko...@essex.ac.uk> wrote:

These locks are based on The Single Unix Specification.

> Hello all,
> Just a few questions on Read/Write
> 1) Where can I find documentation, sample code,

http://www.opengroup.org/onlinepubs/007908799/

> 2) Can I treat the rwlock stuff same as a mutex
> in terms of init/destroy/lock/unlock/trylock ???
> I had a look at pthread.h and all the calls look
> the same... (Is it basically a mutex that allows
> multiple locks for readers?)

Something like that.

> 3) What's the story with overhead if you start using
> r/w locks?

In Linux, there is somewhat more overhead compared to mutexes because the
locks are more complex. The structures and the operations on them are larger.

Also, as of glibc-2.1.3, each thread maintains a linked list of nodes which
record the read locks that the thread currently holds. Each time a read lock
is acquired, a linear search of this list is made to see whether the calling
thread already holds a read lock on that object. (The lists are actually
stacks, so that a recently acquired lock is near the top and is found
quickly.)

This algorithm is in place in order to implement writer preference for locks
while still honoring the requirement that recursive read locks always
succeed. The prior versions of the library purported to implement writer
preference, but did not actually provide it.

> 4) If you have many readers could that mean that the
> writer will never get a chance to lock, or are the
> locks first-come-first-serve ??? I'm thinking

Writer preference, subject to the requirements of The Single UNIX
Specification, which says that a thread may recursively acquire a read lock
unconditionally, even if writers are waiting.

In glibc-2.1.3, LinuxThreads supports the non-portable attribute
PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP, which gives you more efficient
writer preference locks, at the cost of not being able to take recursive
read locks.

> (I know it's probably dim but...) if a reader can
> always lock, there might be a case where there is
> always at least one reader on the mutex. What
> happens if a writer comes along and rwlocks ???

If you read the spec, you will note that this is implementation defined. An
implementation may, but is not required to, support writer preference. The
Linux one does (now).
--
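As an illustration only (the type and function names below are invented, and this is not the LinuxThreads source), the per-thread bookkeeping described above amounts to keeping a stack of owned read locks and searching it on every rdlock:

-------------------------------------------------------------
// Sketch only: each thread keeps a stack of the read locks it owns, and
// rdlock does a linear search of that stack before deciding how to behave.
#include <pthread.h>

struct owned_rdlock {
    pthread_rwlock_t *lock;   // which rwlock this node refers to
    int               count;  // recursion depth held by this thread
    owned_rdlock     *next;   // next node; most recently acquired lock is on top
};

static __thread owned_rdlock *owned_list = 0;   // one stack per thread (GCC __thread)

// Linear search: does the calling thread already hold a read lock on `lock`?
static owned_rdlock *find_owned(pthread_rwlock_t *lock)
{
    for (owned_rdlock *p = owned_list; p; p = p->next)
        if (p->lock == lock)
            return p;
    return 0;
}

// Called after a successful rdlock: bump the recursion count, or push a new node.
// A writer-preferring rdlock would bypass waiting writers only when find_owned()
// returns non-null, i.e. when the caller already owns a read lock.
static void record_rdlock(pthread_rwlock_t *lock)
{
    owned_rdlock *p = find_owned(lock);
    if (p) { ++p->count; return; }
    owned_rdlock *node = new owned_rdlock();
    node->lock = lock;
    node->count = 1;
    node->next = owned_list;
    owned_list = node;
}
-------------------------------------------------------------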
I don't know the whole project, but since you're executing a simulation,
another solution I can think of is counting the number of operations.

Give each robot an operation bank, e.g. A:100, then run robotA in a loop;
robotA decreases his operation bank for every operation he performs, possibly
ending up at A:-8. The same goes for all robots.

Next turn, on "pay day", you give (add) 100 operations to each robot (so
A:92), and they execute for this amount of operations. Almost every time the
robots will stay close to their budget.

If one uses even more than twice the time he had (say A:-230), next turn
robotA will still be on a negative balance (-130), so he won't run at all
that turn. That's a bit extreme for an example, but even in this situation
the balances even out after a few turns, unless there's too much recursion
that causes too great a "cheating" by the robots.

Personally, I found out this method works quite nicely as long as your
operation counting is reasonably accurate.
--
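A tiny sketch of that budgeting scheme (the Robot type and function names are made up; the 100-op allowance and per-action cost of 8 are just the figures used above):

-------------------------------------------------------------
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical per-robot operation budget, following the scheme described above.
struct Robot {
    int balance;   // remaining operation credit; may go negative
};

// One simulation turn: every robot gets 100 ops of "pay", then runs while its
// balance is positive. A robot that overspent badly last turn may still be
// negative after pay day and therefore sits this turn out.
void run_turn(std::vector<Robot> &robots)
{
    for (std::size_t i = 0; i < robots.size(); ++i) {
        robots[i].balance += 100;        // pay day: add this turn's allowance
        while (robots[i].balance > 0) {
            // do_one_robot_action(robots[i]);   // placeholder, not a real API
            robots[i].balance -= 8;      // charge the action's cost; the last
        }                                // action can push the balance negative
    }
}

int main()
{
    std::vector<Robot> robots(3);        // balances start at zero
    run_turn(robots);
    std::printf("robot 0 balance after one turn: %d\n", robots[0].balance);
    return 0;
}
-------------------------------------------------------------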
Hi,
I'm trying to use pthreads with C++; however, whenever I link with the
pthread lib, I get segfaults. I'm using redhat 6.1 and g++. The segfault
occurs at the first function call (according to gdb). I'm compiling with the
command:

g++ -g -o appl appl.c -lpthread

Any ideas? Please respond to smas...@bikerider.com

Thanks

Sent via Deja.com http://www.deja.com/
virtualsmas...@my-deja.com wrote:

Yop,

> Hi,
> Any ideas? please respond to smas...@bikerider.com
> Thanks
> Sent via Deja.com http://www.deja.com/

Perhaps it is because you don't use -D_REENTRANT, but if you send the
Ooops.
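If the missing define is indeed the problem, the fix would just be to add it to the compile command quoted above (shown for illustration; the rest of the line is unchanged):

g++ -g -D_REENTRANT -o appl appl.c -lpthread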
In article <38EB8012.8331D...@inria.fr>,
Fabrice Peix <Fabrice.P...@inria.fr> wrote:

Thanks for the reply. I've tracked the problem down a little more; it seems
to be something to do with making mutexes part of a class? Threads work as
long as I don't create one of the following:

class ApplStampHashTree_t
{
  ...
}

gdb gives the following:

GNU gdb 4.18
Program received signal SIGSEGV, Segmentation fault.

It's kind of odd that it flags an error there. If you remove the call

Thanks

Sent via Deja.com http://www.deja.com/
Actually, it seems that I'm trying to make too many. HASH_SIZE is 65536.
Any idea if this is a kernel limit, or can I change it?

Thanks
In article <8cg41s$i2...@nnrp1.deja.com>,
% In article <38EB8012.8331D...@inria.fr>,
% Fabrice Peix <Fabrice.P...@inria.fr> wrote:
% Thanks for the reply. I've tracked the problem down a little more; it seems
% to be something to do with making mutexes part of a class?

That seems a bit unlikely.

% Threads work as long as I don't create one of the following:

How big is this? It could be that you're blowing up the stack. If you're
creating megs of data on the stack (ie, you have a lot of large automatic
variables), a thread can easily run out of stack space.

% didn't include main.cc because it's not the problem. If I comment out

Try commenting out the list member, or making HASH_SIZE much smaller.

Patrick TJ McPhee
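To illustrate the stack-size point (the layout of ApplStampHashTree_t below is only a guess at its scale, not the poster's real class), an object holding a 65536-entry table of mutexes is several megabytes, which can overflow a typical default thread stack but is harmless on the heap:

-------------------------------------------------------------
#include <pthread.h>
#include <cstdio>

const int HASH_SIZE = 65536;

struct Bucket {
    pthread_mutex_t mtx;       // one mutex per bucket
    void           *entries;
};

struct ApplStampHashTree_t {   // guessed shape, for size illustration only
    Bucket table[HASH_SIZE];   // several megabytes, depending on the platform
};

void *thread_func(void *)
{
    // ApplStampHashTree_t local_tree;                       // risky: may overflow the thread stack
    ApplStampHashTree_t *tree = new ApplStampHashTree_t();   // heap allocation is fine
    std::printf("tree object is %zu bytes\n", sizeof(*tree));
    delete tree;
    return 0;
}

int main()
{
    pthread_t t;
    pthread_create(&t, 0, thread_func, 0);
    pthread_join(t, 0);
    return 0;
}
-------------------------------------------------------------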
=================================================
An implementation of a read/write lock (reposted from http://blog.sina.com.cn/s/blog_48d4cf2d0100mx6w.html), built on the same principle as boost's shared_mutex. The class bodies below are a minimal sketch of that principle, using a mutex and a condition variable to guard a reader count and a writer flag; the original post's exact code may differ.
#pragma once
#include <boost/thread.hpp>
// Minimal sketch: a mutex and a condition variable guard a reader count and a writer flag (the original post's exact code may differ).
class read_write_mutex
{
public:
    read_write_mutex() : readers_(0), writing_(false) {}
    void lock_read()    { boost::mutex::scoped_lock lk(mtx_); while (writing_) cond_.wait(lk); ++readers_; }
    void unlock_read()  { boost::mutex::scoped_lock lk(mtx_); if (--readers_ == 0) cond_.notify_all(); }
    void lock_write()   { boost::mutex::scoped_lock lk(mtx_); while (writing_ || readers_ > 0) cond_.wait(lk); writing_ = true; }
    void unlock_write() { boost::mutex::scoped_lock lk(mtx_); writing_ = false; cond_.notify_all(); }
private:
    boost::mutex mtx_;                  // protects the state below
    boost::condition_variable cond_;    // readers and writers wait here
    int  readers_;                      // number of threads holding a read lock
    bool writing_;                      // true while a writer holds the lock
};
// RAII helpers: hold a read (or write) lock for the lifetime of the object.
class scoped_rlock
{
public:
    explicit scoped_rlock(read_write_mutex &m) : m_(m) { m_.lock_read(); }
    ~scoped_rlock() { m_.unlock_read(); }
private:
    read_write_mutex &m_;
};
class scoped_wlock
{
public:
    explicit scoped_wlock(read_write_mutex &m) : m_(m) { m_.lock_write(); }
    ~scoped_wlock() { m_.unlock_write(); }
private:
    read_write_mutex &m_;
};
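A short usage sketch for these classes (the shared counter, thread functions, and the header file name are made up for illustration):

-------------------------------------------------------------
#include <iostream>
#include <boost/thread.hpp>
#include "read_write_mutex.hpp"   // hypothetical name for the header above

read_write_mutex counter_mutex;   // protects shared_value
int shared_value = 0;

void reader_thread()
{
    scoped_rlock guard(counter_mutex);   // many readers may hold this at once
    std::cout << "value = " << shared_value << std::endl;
}

void writer_thread()
{
    scoped_wlock guard(counter_mutex);   // exclusive access
    ++shared_value;
}

int main()
{
    boost::thread w(writer_thread);
    boost::thread r(reader_thread);
    w.join();
    r.join();
    return 0;
}
-------------------------------------------------------------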