path: root/db-4.8.30/mutex
author    Jesse Morgan <jesse@jesterpm.net>  2016-12-17 21:28:53 -0800
committer Jesse Morgan <jesse@jesterpm.net>  2016-12-17 21:28:53 -0800
commit    54df2afaa61c6a03cbb4a33c9b90fa572b6d07b8 (patch)
tree      18147b92b969d25ffbe61935fb63035cac820dd0 /db-4.8.30/mutex
Berkeley DB 4.8 with rust build script for linux.
Diffstat (limited to 'db-4.8.30/mutex')
-rw-r--r--  db-4.8.30/mutex/README          110
-rw-r--r--  db-4.8.30/mutex/mut_alloc.c     237
-rw-r--r--  db-4.8.30/mutex/mut_failchk.c    69
-rw-r--r--  db-4.8.30/mutex/mut_fcntl.c     232
-rw-r--r--  db-4.8.30/mutex/mut_method.c    434
-rw-r--r--  db-4.8.30/mutex/mut_pthread.c   638
-rw-r--r--  db-4.8.30/mutex/mut_region.c    407
-rw-r--r--  db-4.8.30/mutex/mut_stat.c      521
-rw-r--r--  db-4.8.30/mutex/mut_stub.c      233
-rw-r--r--  db-4.8.30/mutex/mut_tas.c       560
-rw-r--r--  db-4.8.30/mutex/mut_win32.c     540
-rw-r--r--  db-4.8.30/mutex/test_mutex.c   1051
-rw-r--r--  db-4.8.30/mutex/uts4_cc.s        26
13 files changed, 5058 insertions, 0 deletions
diff --git a/db-4.8.30/mutex/README b/db-4.8.30/mutex/README
new file mode 100644
index 0000000..6e95c5f
--- /dev/null
+++ b/db-4.8.30/mutex/README
@@ -0,0 +1,110 @@
+# $Id$
+
+Note: this only applies to locking using test-and-set and fcntl calls;
+pthreads were added after this was written.
+
+Resource locking routines: lock based on a DB_MUTEX. All this gunk
+(including trying to make assembly code portable) is necessary because
+System V semaphores require system calls for uncontested locks and we
+don't want to make two system calls per resource lock.
+
+First, this is how it works. The DB_MUTEX structure contains a resource
+test-and-set lock (tsl), a file offset, a pid for debugging, and
+statistics information.
+
+If HAVE_MUTEX_FCNTL is NOT defined (that is, we know how to do
+test-and-sets for this compiler/architecture combination), we try to
+lock the resource tsl some number of times (based on the number of
+processors). If we can't acquire the mutex that way, we use a system
+call to sleep for 1ms, 2ms, 4ms, etc. (The time is bounded at 10ms for
+mutexes backing logical locks and 25ms for data structures, just in
+case.) Using the timer backoff means making two assumptions: that
+mutexes are held for brief periods (never across system calls or I/O),
+and that mutexes are not hotly contested.
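+
+To illustrate the backoff loop (a sketch only, not the DB code -- it
+assumes the GCC __sync test-and-set builtin and usleep(3), neither of
+which this package can assume portably):
+
+	#include <unistd.h>
+
+	/*
+	 * Sketch: spin on a test-and-set word some number of times, then
+	 * sleep 1ms, 2ms, 4ms, ..., bounded at max_ms, and spin again.
+	 */
+	static void
+	backoff_lock(volatile int *tsl, int nspins, int max_ms)
+	{
+		int i, ms;
+
+		for (ms = 1;;) {
+			for (i = 0; i < nspins; ++i)
+				if (__sync_lock_test_and_set(tsl, 1) == 0)
+					return;		/* acquired */
+			(void)usleep((useconds_t)ms * 1000);
+			if ((ms <<= 1) > max_ms)
+				ms = max_ms;
+		}
+	}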
+
+If HAVE_MUTEX_FCNTL is defined, we use a file descriptor to do byte
+locking on a file at a specified offset. In this case, ALL of the
+locking is done in the kernel. Because file descriptors are allocated
+per process, we have to provide the file descriptor as part of the lock
+call. We still have to do timer backoff because we need to be able to
+block ourselves, that is, the lock manager causes processes to wait by
+having the process acquire a mutex and then attempt to re-acquire the
+mutex. There's no way to use kernel locking to block yourself: if you
+hold a lock and attempt to re-acquire it, the attempt will succeed.
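+
+For illustration, the kernel-locking half is just fcntl(2) byte locks
+(a sketch only; see mut_fcntl.c in this tree for the real version,
+including the self-blocking dance described above):
+
+	#include <fcntl.h>
+	#include <unistd.h>
+
+	/*
+	 * Sketch: lock or unlock one byte of an open file at the mutex's
+	 * offset; F_SETLKW blocks while the byte is held by another
+	 * process.
+	 */
+	static int
+	byte_lock(int fd, off_t offset, int lock_it)
+	{
+		struct flock fl;
+
+		fl.l_type = lock_it ? F_WRLCK : F_UNLCK;
+		fl.l_whence = SEEK_SET;
+		fl.l_start = offset;
+		fl.l_len = 1;
+		return (fcntl(fd, F_SETLKW, &fl));
+	}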
+
+Next, let's talk about why it doesn't work the way a reasonable person
+would think it should work.
+
+Ideally, we'd have the ability to try to lock the resource tsl, and if
+that fails, increment a counter of waiting processes, then block in the
+kernel until the tsl is released. The process holding the resource tsl
+would see the wait counter when it went to release the resource tsl, and
+would wake any waiting processes up after releasing the lock. This would
+actually require both another tsl (call it the mutex tsl) and
+synchronization between the call that blocks in the kernel and the actual
+resource tsl. The mutex tsl would be used to protect accesses to the
+DB_MUTEX itself. Locking the mutex tsl would be done by a busy loop,
+which is safe because processes would never block holding that tsl (all
+they would do is try to obtain the resource tsl and set/check the wait
+count). The problem in this model is that the blocking call into the
+kernel requires a blocking semaphore, i.e. one whose normal state is
+locked.
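+
+As a sketch of the ideal (using a POSIX process-shared unnamed
+semaphore, initialized to 0 so its normal state is "locked", as the
+blocking primitive, and GCC __sync builtins for the tsls -- neither of
+which was a portable assumption when this was written):
+
+	#include <semaphore.h>
+
+	struct ideal_mutex {
+		volatile int tsl;	/* resource test-and-set lock */
+		volatile int mtx_tsl;	/* protects the wait count */
+		int wait_count;		/* processes blocked in the kernel */
+		sem_t sem;		/* sem_init(&sem, 1, 0) at creation */
+	};
+
+	static void
+	ideal_lock(struct ideal_mutex *m)
+	{
+		for (;;) {
+			while (__sync_lock_test_and_set(&m->mtx_tsl, 1))
+				;		/* busy loop; never blocks */
+			if (__sync_lock_test_and_set(&m->tsl, 1) == 0) {
+				__sync_lock_release(&m->mtx_tsl);
+				return;
+			}
+			++m->wait_count;
+			__sync_lock_release(&m->mtx_tsl);
+			(void)sem_wait(&m->sem);	/* block in the kernel */
+		}
+	}
+
+	static void
+	ideal_unlock(struct ideal_mutex *m)
+	{
+		while (__sync_lock_test_and_set(&m->mtx_tsl, 1))
+			;
+		__sync_lock_release(&m->tsl);
+		if (m->wait_count > 0) {
+			--m->wait_count;
+			(void)sem_post(&m->sem);	/* wake one waiter */
+		}
+		__sync_lock_release(&m->mtx_tsl);
+	}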
+
+The only portable forms of locking under UNIX are fcntl(2) on a file
+descriptor/offset, and System V semaphores. Neither of these locking
+methods are sufficient to solve the problem.
+
+The problem with fcntl locking is that only the process that obtained
+the lock can release it. Remember, we want the normal state of the
+kernel semaphore to be locked, so suppose the creator of the DB_MUTEX
+initializes the lock to "locked". Now a second process locks the
+resource tsl, and a third process needs to block, waiting for the
+resource tsl. When the second process wants to wake up the third
+process, it can't, because it isn't the holder of the kernel lock! For
+the second process to be the holder of the lock, we would have to make
+a system call per uncontested lock, which is what we were trying to get
+away from in the first place.
+
+There are some hybrid schemes, such as signaling the holder of the lock,
+or using a different blocking offset depending on which process is
+holding the lock, but it gets complicated fairly quickly. I'm open to
+suggestions, but I'm not holding my breath.
+
+Regardless, we use this form of locking when we don't have any other
+choice, because it doesn't have the limitations found in System V
+semaphores, and because the normal state of the kernel object in that
+case is unlocked, so the process releasing the lock is also the holder
+of the lock.
+
+The System V semaphore design has a number of other limitations that make
+it inappropriate for this task. Namely:
+
+First, the semaphore key name space is separate from the file system name
+space (although there exist methods for using file names to create
+semaphore keys). If we use a well-known key, there's no reason to believe
+that any particular key will not already be in use, either by another
+instance of the DB application or some other application, in which case
+the DB application will fail. If we create a key, then we have to use a
+file system name to rendezvous and pass around the key.
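+
+(The file-name method referred to is ftok(3); a minimal sketch, assuming
+the rendezvous file already exists:)
+
+	#include <sys/ipc.h>
+
+	/*
+	 * Sketch: derive a SysV IPC key from a file name.  Distinct
+	 * files may still yield colliding keys, which is part of the
+	 * problem.
+	 */
+	static key_t
+	key_from_file(const char *path)
+	{
+		return (ftok(path, 1));		/* (key_t)-1 on error */
+	}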
+
+Second, System V semaphores traditionally have compile-time, system-wide
+limits on the number of semaphore keys that you can have. Typically, that
+number is far too low for any practical purpose. Since the semaphores
+permit more than a single slot per semaphore key, we could try to get
+around that limit by using multiple slots, but that means that the file
+that we're using for rendezvous is going to have to contain slot
+information as well as semaphore key information, and we're going to be
+reading/writing it on every db_mutex_t init or destroy operation. Anyhow,
+similar compile-time, system-wide limits on the numbers of slots per
+semaphore key kick in, and you're right back where you started.
+
+My fantasy is that once POSIX.1 standard mutexes are in widespread use,
+we can switch to them. My guess is that it won't happen, because POSIX
+mutexes are only required to work for threads within a process, and not
+for independent processes.
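+
+(For reference, the initialization we'd hope to use is sketched below;
+it assumes PTHREAD_PROCESS_SHARED actually works across processes,
+which is exactly the part the standard doesn't guarantee:)
+
+	#include <pthread.h>
+
+	/*
+	 * Sketch: initialize a pthread mutex intended to be usable by
+	 * multiple processes sharing the memory it lives in.
+	 */
+	static int
+	init_shared_mutex(pthread_mutex_t *mutexp)
+	{
+		pthread_mutexattr_t attr;
+		int ret;
+
+		if ((ret = pthread_mutexattr_init(&attr)) != 0)
+			return (ret);
+		if ((ret = pthread_mutexattr_setpshared(
+		    &attr, PTHREAD_PROCESS_SHARED)) == 0)
+			ret = pthread_mutex_init(mutexp, &attr);
+		(void)pthread_mutexattr_destroy(&attr);
+		return (ret);
+	}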
+
+Note: there are races in the statistics code, but since it's just
+statistics, I didn't bother fixing them. (The fix requires a mutex tsl,
+so, when/if this code is fixed to do rational locking (see above),
+change the statistics update code to acquire/release the mutex tsl.)
diff --git a/db-4.8.30/mutex/mut_alloc.c b/db-4.8.30/mutex/mut_alloc.c
new file mode 100644
index 0000000..c25e3a2
--- /dev/null
+++ b/db-4.8.30/mutex/mut_alloc.c
@@ -0,0 +1,237 @@
+/*-
+ * See the file LICENSE for redistribution information.
+ *
+ * Copyright (c) 1999-2009 Oracle. All rights reserved.
+ *
+ * $Id$
+ */
+
+#include "db_config.h"
+
+#include "db_int.h"
+
+/*
+ * __mutex_alloc --
+ * Allocate a mutex from the mutex region.
+ *
+ * PUBLIC: int __mutex_alloc __P((ENV *, int, u_int32_t, db_mutex_t *));
+ */
+int
+__mutex_alloc(env, alloc_id, flags, indxp)
+ ENV *env;
+ int alloc_id;
+ u_int32_t flags;
+ db_mutex_t *indxp;
+{
+ int ret;
+
+ /* The caller may depend on us to initialize. */
+ *indxp = MUTEX_INVALID;
+
+ /*
+ * If this is not an application lock, and we've turned off locking,
+ * or the ENV handle isn't thread-safe, and this is a thread lock
+ * or the environment isn't multi-process by definition, there's no
+ * need to mutex at all.
+ */
+ if (alloc_id != MTX_APPLICATION &&
+ (F_ISSET(env->dbenv, DB_ENV_NOLOCKING) ||
+ (!F_ISSET(env, ENV_THREAD) &&
+ (LF_ISSET(DB_MUTEX_PROCESS_ONLY) ||
+ F_ISSET(env, ENV_PRIVATE)))))
+ return (0);
+
+ /* Private environments never share mutexes. */
+ if (F_ISSET(env, ENV_PRIVATE))
+ LF_SET(DB_MUTEX_PROCESS_ONLY);
+
+ /*
+ * If we have a region in which to allocate the mutexes, lock it and
+ * do the allocation.
+ */
+ if (MUTEX_ON(env))
+ return (__mutex_alloc_int(env, 1, alloc_id, flags, indxp));
+
+ /*
+ * We have to allocate some number of mutexes before we have a region
+ * in which to allocate them. We handle this by saving up the list of
+ * flags and allocating them as soon as we have a handle.
+ *
+ * The list of mutexes to alloc is maintained in pairs: first the
+ * alloc_id argument, second the flags passed in by the caller.
+ */
+ if (env->mutex_iq == NULL) {
+ env->mutex_iq_max = 50;
+ if ((ret = __os_calloc(env, env->mutex_iq_max,
+ sizeof(env->mutex_iq[0]), &env->mutex_iq)) != 0)
+ return (ret);
+ } else if (env->mutex_iq_next == env->mutex_iq_max - 1) {
+ env->mutex_iq_max *= 2;
+ if ((ret = __os_realloc(env,
+ env->mutex_iq_max * sizeof(env->mutex_iq[0]),
+ &env->mutex_iq)) != 0)
+ return (ret);
+ }
+ *indxp = env->mutex_iq_next + 1; /* Correct for MUTEX_INVALID. */
+ env->mutex_iq[env->mutex_iq_next].alloc_id = alloc_id;
+ env->mutex_iq[env->mutex_iq_next].flags = flags;
+ ++env->mutex_iq_next;
+
+ return (0);
+}
+
+/*
+ * __mutex_alloc_int --
+ * Internal routine to allocate a mutex.
+ *
+ * PUBLIC: int __mutex_alloc_int
+ * PUBLIC: __P((ENV *, int, int, u_int32_t, db_mutex_t *));
+ */
+int
+__mutex_alloc_int(env, locksys, alloc_id, flags, indxp)
+ ENV *env;
+ int locksys, alloc_id;
+ u_int32_t flags;
+ db_mutex_t *indxp;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ int ret;
+
+ dbenv = env->dbenv;
+ mtxmgr = env->mutex_handle;
+ mtxregion = mtxmgr->reginfo.primary;
+ ret = 0;
+
+ /*
+ * If we're not initializing the mutex region, then lock the region to
+	 * allocate new mutexes. Drop the lock before initializing the
+	 * mutex, since mutex initialization may require a system call.
+ */
+ if (locksys)
+ MUTEX_SYSTEM_LOCK(env);
+
+ if (mtxregion->mutex_next == MUTEX_INVALID) {
+ __db_errx(env,
+ "unable to allocate memory for mutex; resize mutex region");
+ if (locksys)
+ MUTEX_SYSTEM_UNLOCK(env);
+ return (ENOMEM);
+ }
+
+ *indxp = mtxregion->mutex_next;
+ mutexp = MUTEXP_SET(mtxmgr, *indxp);
+ DB_ASSERT(env,
+ ((uintptr_t)mutexp & (dbenv->mutex_align - 1)) == 0);
+ mtxregion->mutex_next = mutexp->mutex_next_link;
+
+ --mtxregion->stat.st_mutex_free;
+ ++mtxregion->stat.st_mutex_inuse;
+ if (mtxregion->stat.st_mutex_inuse > mtxregion->stat.st_mutex_inuse_max)
+ mtxregion->stat.st_mutex_inuse_max =
+ mtxregion->stat.st_mutex_inuse;
+ if (locksys)
+ MUTEX_SYSTEM_UNLOCK(env);
+
+ /* Initialize the mutex. */
+ memset(mutexp, 0, sizeof(*mutexp));
+ F_SET(mutexp, DB_MUTEX_ALLOCATED |
+ LF_ISSET(DB_MUTEX_LOGICAL_LOCK |
+ DB_MUTEX_PROCESS_ONLY | DB_MUTEX_SHARED));
+
+ /*
+ * If the mutex is associated with a single process, set the process
+ * ID. If the application ever calls DbEnv::failchk, we'll need the
+ * process ID to know if the mutex is still in use.
+ */
+ if (LF_ISSET(DB_MUTEX_PROCESS_ONLY))
+ dbenv->thread_id(dbenv, &mutexp->pid, NULL);
+
+#ifdef HAVE_STATISTICS
+ mutexp->alloc_id = alloc_id;
+#else
+ COMPQUIET(alloc_id, 0);
+#endif
+
+ if ((ret = __mutex_init(env, *indxp, flags)) != 0)
+ (void)__mutex_free_int(env, locksys, indxp);
+
+ return (ret);
+}
+
+/*
+ * __mutex_free --
+ * Free a mutex.
+ *
+ * PUBLIC: int __mutex_free __P((ENV *, db_mutex_t *));
+ */
+int
+__mutex_free(env, indxp)
+ ENV *env;
+ db_mutex_t *indxp;
+{
+ /*
+	 * There is no explicit ordering in how the regions are cleaned
+ * up and/or discarded when an environment is destroyed (either a
+ * private environment is closed or a public environment is removed).
+ * The way we deal with mutexes is to clean up all remaining mutexes
+ * when we close the mutex environment (because we have to be able to
+ * do that anyway, after a crash), which means we don't have to deal
+ * with region cleanup ordering on normal environment destruction.
+ * All that said, what it really means is we can get here without a
+ * mpool region. It's OK, the mutex has been, or will be, destroyed.
+ *
+ * If the mutex has never been configured, we're done.
+ */
+ if (!MUTEX_ON(env) || *indxp == MUTEX_INVALID)
+ return (0);
+
+ return (__mutex_free_int(env, 1, indxp));
+}
+
+/*
+ * __mutex_free_int --
+ * Internal routine to free a mutex.
+ *
+ * PUBLIC: int __mutex_free_int __P((ENV *, int, db_mutex_t *));
+ */
+int
+__mutex_free_int(env, locksys, indxp)
+ ENV *env;
+ int locksys;
+ db_mutex_t *indxp;
+{
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ db_mutex_t mutex;
+ int ret;
+
+ mutex = *indxp;
+ *indxp = MUTEX_INVALID;
+
+ mtxmgr = env->mutex_handle;
+ mtxregion = mtxmgr->reginfo.primary;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ DB_ASSERT(env, F_ISSET(mutexp, DB_MUTEX_ALLOCATED));
+ F_CLR(mutexp, DB_MUTEX_ALLOCATED);
+
+ ret = __mutex_destroy(env, mutex);
+
+ if (locksys)
+ MUTEX_SYSTEM_LOCK(env);
+
+ /* Link the mutex on the head of the free list. */
+ mutexp->mutex_next_link = mtxregion->mutex_next;
+ mtxregion->mutex_next = mutex;
+ ++mtxregion->stat.st_mutex_free;
+ --mtxregion->stat.st_mutex_inuse;
+
+ if (locksys)
+ MUTEX_SYSTEM_UNLOCK(env);
+
+ return (ret);
+}
diff --git a/db-4.8.30/mutex/mut_failchk.c b/db-4.8.30/mutex/mut_failchk.c
new file mode 100644
index 0000000..6fbebde
--- /dev/null
+++ b/db-4.8.30/mutex/mut_failchk.c
@@ -0,0 +1,69 @@
+/*-
+ * See the file LICENSE for redistribution information.
+ *
+ * Copyright (c) 2005-2009 Oracle. All rights reserved.
+ *
+ * $Id$
+ */
+
+#include "db_config.h"
+
+#include "db_int.h"
+
+/*
+ * __mut_failchk --
+ * Check for mutexes held by dead processes.
+ *
+ * PUBLIC: int __mut_failchk __P((ENV *));
+ */
+int
+__mut_failchk(env)
+ ENV *env;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ db_mutex_t i;
+ int ret;
+ char buf[DB_THREADID_STRLEN];
+
+ dbenv = env->dbenv;
+ mtxmgr = env->mutex_handle;
+ mtxregion = mtxmgr->reginfo.primary;
+ ret = 0;
+
+ MUTEX_SYSTEM_LOCK(env);
+	for (i = 1; i <= mtxregion->stat.st_mutex_cnt; ++i) {
+ mutexp = MUTEXP_SET(mtxmgr, i);
+
+ /*
+ * We're looking for per-process mutexes where the process
+ * has died.
+ */
+ if (!F_ISSET(mutexp, DB_MUTEX_ALLOCATED) ||
+ !F_ISSET(mutexp, DB_MUTEX_PROCESS_ONLY))
+ continue;
+
+ /*
+ * The thread that allocated the mutex may have exited, but
+ * we cannot reclaim the mutex if the process is still alive.
+ */
+ if (dbenv->is_alive(
+ dbenv, mutexp->pid, 0, DB_MUTEX_PROCESS_ONLY))
+ continue;
+
+ __db_msg(env, "Freeing mutex for process: %s",
+ dbenv->thread_id_string(dbenv, mutexp->pid, 0, buf));
+
+ /* Unlock and free the mutex. */
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
+ MUTEX_UNLOCK(env, i);
+
+ if ((ret = __mutex_free_int(env, 0, &i)) != 0)
+ break;
+ }
+ MUTEX_SYSTEM_UNLOCK(env);
+
+ return (ret);
+}
diff --git a/db-4.8.30/mutex/mut_fcntl.c b/db-4.8.30/mutex/mut_fcntl.c
new file mode 100644
index 0000000..d1b896f
--- /dev/null
+++ b/db-4.8.30/mutex/mut_fcntl.c
@@ -0,0 +1,232 @@
+/*-
+ * See the file LICENSE for redistribution information.
+ *
+ * Copyright (c) 1996-2009 Oracle. All rights reserved.
+ *
+ * $Id$
+ */
+
+#include "db_config.h"
+
+#include "db_int.h"
+
+static inline int __db_fcntl_mutex_lock_int __P((ENV *, db_mutex_t, int));
+
+/*
+ * __db_fcntl_mutex_init --
+ * Initialize a fcntl mutex.
+ *
+ * PUBLIC: int __db_fcntl_mutex_init __P((ENV *, db_mutex_t, u_int32_t));
+ */
+int
+__db_fcntl_mutex_init(env, mutex, flags)
+ ENV *env;
+ db_mutex_t mutex;
+ u_int32_t flags;
+{
+ COMPQUIET(env, NULL);
+ COMPQUIET(mutex, MUTEX_INVALID);
+ COMPQUIET(flags, 0);
+
+ return (0);
+}
+
+/*
+ * __db_fcntl_mutex_lock_int
+ * Internal function to lock a mutex, blocking only when requested
+ */
+inline int
+__db_fcntl_mutex_lock_int(env, mutex, wait)
+ ENV *env;
+ db_mutex_t mutex;
+ int wait;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_THREAD_INFO *ip;
+ struct flock k_lock;
+ int locked, ms, ret;
+
+ dbenv = env->dbenv;
+
+ if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ CHECK_MTX_THREAD(env, mutexp);
+
+#ifdef HAVE_STATISTICS
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
+ ++mutexp->mutex_set_wait;
+ else
+ ++mutexp->mutex_set_nowait;
+#endif
+
+ /* Initialize the lock. */
+ k_lock.l_whence = SEEK_SET;
+ k_lock.l_start = mutex;
+ k_lock.l_len = 1;
+
+ /*
+ * Only check the thread state once, by initializing the thread
+ * control block pointer to null. If it is not the failchk
+ * thread, then ip will have a valid value subsequent times
+	 * thread, then ip will have a valid value on subsequent passes
+	 * through the loop.
+ ip = NULL;
+
+ for (locked = 0;;) {
+ /*
+ * Wait for the lock to become available; wait 1ms initially,
+ * up to 1 second.
+ */
+ for (ms = 1; F_ISSET(mutexp, DB_MUTEX_LOCKED);) {
+ if (F_ISSET(dbenv, DB_ENV_FAILCHK) &&
+ ip == NULL && dbenv->is_alive(dbenv,
+ mutexp->pid, mutexp->tid, 0) == 0) {
+ ret = __env_set_state(env, &ip, THREAD_VERIFY);
+ if (ret != 0 ||
+ ip->dbth_state == THREAD_FAILCHK)
+ return (DB_RUNRECOVERY);
+ }
+ if (!wait)
+ return (DB_LOCK_NOTGRANTED);
+ __os_yield(NULL, 0, ms * US_PER_MS);
+ if ((ms <<= 1) > MS_PER_SEC)
+ ms = MS_PER_SEC;
+ }
+
+ /* Acquire an exclusive kernel lock on the byte. */
+ k_lock.l_type = F_WRLCK;
+ if (fcntl(env->lockfhp->fd, F_SETLKW, &k_lock))
+ goto err;
+
+ /* If the resource is still available, it's ours. */
+ if (!F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
+ locked = 1;
+
+ F_SET(mutexp, DB_MUTEX_LOCKED);
+ dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);
+ }
+
+ /* Release the kernel lock. */
+ k_lock.l_type = F_UNLCK;
+ if (fcntl(env->lockfhp->fd, F_SETLK, &k_lock))
+ goto err;
+
+ /*
+ * If we got the resource lock we're done.
+ *
+ * !!!
+ * We can't check to see if the lock is ours, because we may
+ * be trying to block ourselves in the lock manager, and so
+ * the holder of the lock that's preventing us from getting
+ * the lock may be us! (Seriously.)
+ */
+ if (locked)
+ break;
+ }
+
+#ifdef DIAGNOSTIC
+ /*
+ * We want to switch threads as often as possible. Yield every time
+ * we get a mutex to ensure contention.
+ */
+ if (F_ISSET(dbenv, DB_ENV_YIELDCPU))
+ __os_yield(env, 0, 0);
+#endif
+ return (0);
+
+err: ret = __os_get_syserr();
+ __db_syserr(env, ret, "fcntl lock failed");
+ return (__env_panic(env, __os_posix_err(ret)));
+}
+
+/*
+ * __db_fcntl_mutex_lock
+ * Lock a mutex, blocking if necessary.
+ *
+ * PUBLIC: int __db_fcntl_mutex_lock __P((ENV *, db_mutex_t));
+ */
+int
+__db_fcntl_mutex_lock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ return (__db_fcntl_mutex_lock_int(env, mutex, 1));
+}
+
+/*
+ * __db_fcntl_mutex_trylock
+ * Try to lock a mutex, without blocking when it is busy.
+ *
+ * PUBLIC: int __db_fcntl_mutex_trylock __P((ENV *, db_mutex_t));
+ */
+int
+__db_fcntl_mutex_trylock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ return (__db_fcntl_mutex_lock_int(env, mutex, 0));
+}
+
+/*
+ * __db_fcntl_mutex_unlock --
+ * Release a mutex.
+ *
+ * PUBLIC: int __db_fcntl_mutex_unlock __P((ENV *, db_mutex_t));
+ */
+int
+__db_fcntl_mutex_unlock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+
+ dbenv = env->dbenv;
+
+ if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+#ifdef DIAGNOSTIC
+ if (!F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
+ __db_errx(env, "fcntl unlock failed: lock already unlocked");
+ return (__env_panic(env, EACCES));
+ }
+#endif
+
+ /*
+ * Release the resource. We don't have to acquire any locks because
+ * processes trying to acquire the lock are waiting for the flag to
+ * go to 0. Once that happens the waiters will serialize acquiring
+ * an exclusive kernel lock before locking the mutex.
+ */
+ F_CLR(mutexp, DB_MUTEX_LOCKED);
+
+ return (0);
+}
+
+/*
+ * __db_fcntl_mutex_destroy --
+ * Destroy a mutex.
+ *
+ * PUBLIC: int __db_fcntl_mutex_destroy __P((ENV *, db_mutex_t));
+ */
+int
+__db_fcntl_mutex_destroy(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ COMPQUIET(env, NULL);
+ COMPQUIET(mutex, MUTEX_INVALID);
+
+ return (0);
+}
diff --git a/db-4.8.30/mutex/mut_method.c b/db-4.8.30/mutex/mut_method.c
new file mode 100644
index 0000000..2588763
--- /dev/null
+++ b/db-4.8.30/mutex/mut_method.c
@@ -0,0 +1,434 @@
+/*-
+ * See the file LICENSE for redistribution information.
+ *
+ * Copyright (c) 1996-2009 Oracle. All rights reserved.
+ *
+ * $Id$
+ */
+
+#include "db_config.h"
+
+#include "db_int.h"
+
+/*
+ * __mutex_alloc_pp --
+ * Allocate a mutex, application method.
+ *
+ * PUBLIC: int __mutex_alloc_pp __P((DB_ENV *, u_int32_t, db_mutex_t *));
+ */
+int
+__mutex_alloc_pp(dbenv, flags, indxp)
+ DB_ENV *dbenv;
+ u_int32_t flags;
+ db_mutex_t *indxp;
+{
+ DB_THREAD_INFO *ip;
+ ENV *env;
+ int ret;
+
+ env = dbenv->env;
+
+ if ((ret = __db_fchk(env, "DB_ENV->mutex_alloc",
+ flags, DB_MUTEX_PROCESS_ONLY | DB_MUTEX_SELF_BLOCK)) != 0)
+ return (ret);
+
+ ENV_ENTER(env, ip);
+ ret = __mutex_alloc(env, MTX_APPLICATION, flags, indxp);
+ ENV_LEAVE(env, ip);
+
+ return (ret);
+}
+
+/*
+ * __mutex_free_pp --
+ * Destroy a mutex, application method.
+ *
+ * PUBLIC: int __mutex_free_pp __P((DB_ENV *, db_mutex_t));
+ */
+int
+__mutex_free_pp(dbenv, indx)
+ DB_ENV *dbenv;
+ db_mutex_t indx;
+{
+ DB_THREAD_INFO *ip;
+ ENV *env;
+ int ret;
+
+ env = dbenv->env;
+
+ if (indx == MUTEX_INVALID)
+ return (EINVAL);
+
+ /*
+ * Internally Berkeley DB passes around the db_mutex_t address on
+ * free, because we want to make absolutely sure the slot gets
+ * overwritten with MUTEX_INVALID. We don't export MUTEX_INVALID,
+ * so we don't export that part of the API, either.
+ */
+ ENV_ENTER(env, ip);
+ ret = __mutex_free(env, &indx);
+ ENV_LEAVE(env, ip);
+
+ return (ret);
+}
+
+/*
+ * __mutex_lock --
+ * Lock a mutex, application method.
+ *
+ * PUBLIC: int __mutex_lock_pp __P((DB_ENV *, db_mutex_t));
+ */
+int
+__mutex_lock_pp(dbenv, indx)
+ DB_ENV *dbenv;
+ db_mutex_t indx;
+{
+ DB_THREAD_INFO *ip;
+ ENV *env;
+ int ret;
+
+ env = dbenv->env;
+
+ if (indx == MUTEX_INVALID)
+ return (EINVAL);
+
+ ENV_ENTER(env, ip);
+ ret = __mutex_lock(env, indx);
+ ENV_LEAVE(env, ip);
+ return (ret);
+}
+
+/*
+ * __mutex_unlock --
+ * Unlock a mutex, application method.
+ *
+ * PUBLIC: int __mutex_unlock_pp __P((DB_ENV *, db_mutex_t));
+ */
+int
+__mutex_unlock_pp(dbenv, indx)
+ DB_ENV *dbenv;
+ db_mutex_t indx;
+{
+ DB_THREAD_INFO *ip;
+ ENV *env;
+ int ret;
+
+ env = dbenv->env;
+
+ if (indx == MUTEX_INVALID)
+ return (EINVAL);
+
+ ENV_ENTER(env, ip);
+ ret = __mutex_unlock(env, indx);
+ ENV_LEAVE(env, ip);
+ return (ret);
+}
+
+/*
+ * __mutex_get_align --
+ * DB_ENV->mutex_get_align.
+ *
+ * PUBLIC: int __mutex_get_align __P((DB_ENV *, u_int32_t *));
+ */
+int
+__mutex_get_align(dbenv, alignp)
+ DB_ENV *dbenv;
+ u_int32_t *alignp;
+{
+ ENV *env;
+
+ env = dbenv->env;
+
+ if (MUTEX_ON(env)) {
+ /* Cannot be set after open, no lock required to read. */
+ *alignp = ((DB_MUTEXREGION *)
+ env->mutex_handle->reginfo.primary)->stat.st_mutex_align;
+ } else
+ *alignp = dbenv->mutex_align;
+ return (0);
+}
+
+/*
+ * __mutex_set_align --
+ * DB_ENV->mutex_set_align.
+ *
+ * PUBLIC: int __mutex_set_align __P((DB_ENV *, u_int32_t));
+ */
+int
+__mutex_set_align(dbenv, align)
+ DB_ENV *dbenv;
+ u_int32_t align;
+{
+ ENV *env;
+
+ env = dbenv->env;
+
+ ENV_ILLEGAL_AFTER_OPEN(env, "DB_ENV->set_mutex_align");
+
+ if (align == 0 || !POWER_OF_TWO(align)) {
+ __db_errx(env,
+ "DB_ENV->mutex_set_align: alignment value must be a non-zero power-of-two");
+ return (EINVAL);
+ }
+
+ dbenv->mutex_align = align;
+ return (0);
+}
+
+/*
+ * __mutex_get_increment --
+ * DB_ENV->mutex_get_increment.
+ *
+ * PUBLIC: int __mutex_get_increment __P((DB_ENV *, u_int32_t *));
+ */
+int
+__mutex_get_increment(dbenv, incrementp)
+ DB_ENV *dbenv;
+ u_int32_t *incrementp;
+{
+ /*
+ * We don't maintain the increment in the region (it just makes
+ * no sense). Return whatever we have configured on this handle,
+ * nobody is ever going to notice.
+ */
+ *incrementp = dbenv->mutex_inc;
+ return (0);
+}
+
+/*
+ * __mutex_set_increment --
+ * DB_ENV->mutex_set_increment.
+ *
+ * PUBLIC: int __mutex_set_increment __P((DB_ENV *, u_int32_t));
+ */
+int
+__mutex_set_increment(dbenv, increment)
+ DB_ENV *dbenv;
+ u_int32_t increment;
+{
+ ENV *env;
+
+ env = dbenv->env;
+
+ ENV_ILLEGAL_AFTER_OPEN(env, "DB_ENV->set_mutex_increment");
+
+ dbenv->mutex_cnt = 0;
+ dbenv->mutex_inc = increment;
+ return (0);
+}
+
+/*
+ * __mutex_get_max --
+ * DB_ENV->mutex_get_max.
+ *
+ * PUBLIC: int __mutex_get_max __P((DB_ENV *, u_int32_t *));
+ */
+int
+__mutex_get_max(dbenv, maxp)
+ DB_ENV *dbenv;
+ u_int32_t *maxp;
+{
+ ENV *env;
+
+ env = dbenv->env;
+
+ if (MUTEX_ON(env)) {
+ /* Cannot be set after open, no lock required to read. */
+ *maxp = ((DB_MUTEXREGION *)
+ env->mutex_handle->reginfo.primary)->stat.st_mutex_cnt;
+ } else
+ *maxp = dbenv->mutex_cnt;
+ return (0);
+}
+
+/*
+ * __mutex_set_max --
+ * DB_ENV->mutex_set_max.
+ *
+ * PUBLIC: int __mutex_set_max __P((DB_ENV *, u_int32_t));
+ */
+int
+__mutex_set_max(dbenv, max)
+ DB_ENV *dbenv;
+ u_int32_t max;
+{
+ ENV *env;
+
+ env = dbenv->env;
+
+ ENV_ILLEGAL_AFTER_OPEN(env, "DB_ENV->set_mutex_max");
+
+ dbenv->mutex_cnt = max;
+ dbenv->mutex_inc = 0;
+ return (0);
+}
+
+/*
+ * __mutex_get_tas_spins --
+ * DB_ENV->mutex_get_tas_spins.
+ *
+ * PUBLIC: int __mutex_get_tas_spins __P((DB_ENV *, u_int32_t *));
+ */
+int
+__mutex_get_tas_spins(dbenv, tas_spinsp)
+ DB_ENV *dbenv;
+ u_int32_t *tas_spinsp;
+{
+ ENV *env;
+
+ env = dbenv->env;
+
+ if (MUTEX_ON(env)) {
+ /* Cannot be set after open, no lock required to read. */
+ *tas_spinsp = ((DB_MUTEXREGION *)env->
+ mutex_handle->reginfo.primary)->stat.st_mutex_tas_spins;
+ } else
+ *tas_spinsp = dbenv->mutex_tas_spins;
+ return (0);
+}
+
+/*
+ * __mutex_set_tas_spins --
+ * DB_ENV->mutex_set_tas_spins.
+ *
+ * PUBLIC: int __mutex_set_tas_spins __P((DB_ENV *, u_int32_t));
+ */
+int
+__mutex_set_tas_spins(dbenv, tas_spins)
+ DB_ENV *dbenv;
+ u_int32_t tas_spins;
+{
+ ENV *env;
+
+ env = dbenv->env;
+
+ /*
+ * Bound the value -- less than 1 makes no sense, greater than 1M
+ * makes no sense.
+ */
+ if (tas_spins == 0)
+ tas_spins = 1;
+ else if (tas_spins > 1000000)
+ tas_spins = 1000000;
+
+ /*
+ * There's a theoretical race here, but I'm not interested in locking
+ * the test-and-set spin count. The worst possibility is a thread
+ * reads out a bad spin count and spins until it gets the lock, but
+ * that's awfully unlikely.
+ */
+ if (MUTEX_ON(env))
+ ((DB_MUTEXREGION *)env->mutex_handle
+ ->reginfo.primary)->stat.st_mutex_tas_spins = tas_spins;
+ else
+ dbenv->mutex_tas_spins = tas_spins;
+ return (0);
+}
+
+#if !defined(HAVE_ATOMIC_SUPPORT) && defined(HAVE_MUTEX_SUPPORT)
+/*
+ * Provide atomic operations for platforms which have mutexes yet do not have
+ * native atomic operations configured. They are emulated by protecting
+ * the operation with a mutex. The address of the atomic value selects which
+ * mutex to use.
+ */
+/*
+ * atomic_get_mutex -
+ * Map an address to the mutex to use to atomically modify it
+ */
+static inline db_mutex_t
+atomic_get_mutex(env, v)
+ ENV *env;
+ db_atomic_t *v;
+{
+ u_int index;
+ DB_MUTEXREGION *mtxreg;
+
+ if (!MUTEX_ON(env))
+ return (MUTEX_INVALID);
+ index = (u_int)(((uintptr_t) (v)) >> 6) % MAX_ATOMIC_MUTEXES;
+ mtxreg = (DB_MUTEXREGION *)env->mutex_handle->reginfo.primary;
+ return (mtxreg->mtx_atomic[index]);
+}
+
+/*
+ * __atomic_inc
+ * Use a mutex to provide an atomic increment function
+ *
+ * PUBLIC: #if !defined(HAVE_ATOMIC_SUPPORT) && defined(HAVE_MUTEX_SUPPORT)
+ * PUBLIC: atomic_value_t __atomic_inc __P((ENV *, db_atomic_t *));
+ * PUBLIC: #endif
+ */
+atomic_value_t
+__atomic_inc(env, v)
+ ENV *env;
+ db_atomic_t *v;
+{
+ db_mutex_t mtx;
+ int ret;
+
+ mtx = atomic_get_mutex(env, v);
+ MUTEX_LOCK(env, mtx);
+ ret = ++v->value;
+ MUTEX_UNLOCK(env, mtx);
+
+ return (ret);
+}
+
+/*
+ * __atomic_dec
+ * Use a mutex to provide an atomic decrement function
+ *
+ * PUBLIC: #if !defined(HAVE_ATOMIC_SUPPORT) && defined(HAVE_MUTEX_SUPPORT)
+ * PUBLIC: atomic_value_t __atomic_dec __P((ENV *, db_atomic_t *));
+ * PUBLIC: #endif
+ */
+atomic_value_t
+__atomic_dec(env, v)
+ ENV *env;
+ db_atomic_t *v;
+{
+ db_mutex_t mtx;
+ int ret;
+
+ mtx = atomic_get_mutex(env, v);
+ MUTEX_LOCK(env, mtx);
+ ret = --v->value;
+ MUTEX_UNLOCK(env, mtx);
+
+ return (ret);
+}
+
+/*
+ * atomic_compare_exchange
+ *	Use a mutex to provide an atomic compare-and-exchange function
+ *
+ * PRIVATE: int atomic_compare_exchange
+ * PRIVATE: __P((ENV *, db_atomic_t *, atomic_value_t, atomic_value_t));
+ * Returns 1 if the *v was equal to oldval, else 0
+ *
+ * Side Effect:
+ * Sets the value to newval if and only if returning 1
+ */
+int
+atomic_compare_exchange(env, v, oldval, newval)
+ ENV *env;
+ db_atomic_t *v;
+ atomic_value_t oldval;
+ atomic_value_t newval;
+{
+ db_mutex_t mtx;
+ int ret;
+
+ if (atomic_read(v) != oldval)
+ return (0);
+
+ mtx = atomic_get_mutex(env, v);
+ MUTEX_LOCK(env, mtx);
+ ret = atomic_read(v) == oldval;
+ if (ret)
+ atomic_init(v, newval);
+ MUTEX_UNLOCK(env, mtx);
+
+ return (ret);
+}
+#endif
diff --git a/db-4.8.30/mutex/mut_pthread.c b/db-4.8.30/mutex/mut_pthread.c
new file mode 100644
index 0000000..51497bb
--- /dev/null
+++ b/db-4.8.30/mutex/mut_pthread.c
@@ -0,0 +1,638 @@
+/*-
+ * See the file LICENSE for redistribution information.
+ *
+ * Copyright (c) 1999-2009 Oracle. All rights reserved.
+ *
+ * $Id$
+ */
+
+#include "db_config.h"
+
+#include "db_int.h"
+
+/*
+ * This is where we load in architecture/compiler specific mutex code.
+ */
+#define LOAD_ACTUAL_MUTEX_CODE
+
+#ifdef HAVE_MUTEX_SOLARIS_LWP
+#define pthread_cond_destroy(x) 0
+#define pthread_cond_signal _lwp_cond_signal
+#define pthread_cond_broadcast _lwp_cond_broadcast
+#define pthread_cond_wait _lwp_cond_wait
+#define pthread_mutex_destroy(x) 0
+#define pthread_mutex_lock _lwp_mutex_lock
+#define pthread_mutex_trylock _lwp_mutex_trylock
+#define pthread_mutex_unlock _lwp_mutex_unlock
+#endif
+#ifdef HAVE_MUTEX_UI_THREADS
+#define pthread_cond_destroy(x) cond_destroy
+#define pthread_cond_broadcast cond_broadcast
+#define pthread_cond_wait cond_wait
+#define pthread_mutex_destroy mutex_destroy
+#define pthread_mutex_lock mutex_lock
+#define pthread_mutex_trylock mutex_trylock
+#define pthread_mutex_unlock mutex_unlock
+#endif
+
+/*
+ * According to HP-UX engineers contacted by Netscape,
+ * pthread_mutex_unlock() will occasionally return EFAULT for no good reason
+ * on mutexes in shared memory regions, and the correct caller behavior
+ * is to try again. Do so, up to EFAULT_RETRY_ATTEMPTS consecutive times.
+ * Note that we don't bother to restrict this to HP-UX;
+ * it should be harmless elsewhere. [#2471]
+ */
+#define EFAULT_RETRY_ATTEMPTS 5
+#define RETRY_ON_EFAULT(func_invocation, ret) do { \
+ int i; \
+ i = EFAULT_RETRY_ATTEMPTS; \
+ do { \
+ RET_SET((func_invocation), ret); \
+ } while (ret == EFAULT && --i > 0); \
+} while (0)
+
+/*
+ * IBM's MVS pthread mutex implementation returns -1 and sets errno rather than
+ * returning errno itself. As -1 is not a valid errno value, assume functions
+ * returning -1 have set errno. If they haven't, return a random error value.
+ */
+#define RET_SET(f, ret) do { \
+ if (((ret) = (f)) == -1 && ((ret) = errno) == 0) \
+ (ret) = EAGAIN; \
+} while (0)
+
+/*
+ * __db_pthread_mutex_init --
+ * Initialize a pthread mutex: either a native one or
+ * just the mutex for block/wakeup of a hybrid test-and-set mutex
+ *
+ *
+ * PUBLIC: int __db_pthread_mutex_init __P((ENV *, db_mutex_t, u_int32_t));
+ */
+int
+__db_pthread_mutex_init(env, mutex, flags)
+ ENV *env;
+ db_mutex_t mutex;
+ u_int32_t flags;
+{
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ int ret;
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+ ret = 0;
+
+#ifndef HAVE_MUTEX_HYBRID
+ /* Can't have self-blocking shared latches. */
+ DB_ASSERT(env, !LF_ISSET(DB_MUTEX_SELF_BLOCK) ||
+ !LF_ISSET(DB_MUTEX_SHARED));
+#endif
+
+#ifdef HAVE_MUTEX_PTHREADS
+ {
+ pthread_condattr_t condattr, *condattrp = NULL;
+ pthread_mutexattr_t mutexattr, *mutexattrp = NULL;
+
+#ifndef HAVE_MUTEX_HYBRID
+ if (LF_ISSET(DB_MUTEX_SHARED)) {
+#if defined(HAVE_SHARED_LATCHES)
+ pthread_rwlockattr_t rwlockattr, *rwlockattrp = NULL;
+#ifndef HAVE_MUTEX_THREAD_ONLY
+ if (!LF_ISSET(DB_MUTEX_PROCESS_ONLY)) {
+ RET_SET((pthread_rwlockattr_init(&rwlockattr)), ret);
+ if (ret != 0)
+ goto err;
+ RET_SET((pthread_rwlockattr_setpshared(
+ &rwlockattr, PTHREAD_PROCESS_SHARED)), ret);
+ rwlockattrp = &rwlockattr;
+ }
+#endif
+
+ if (ret == 0)
+ RET_SET((pthread_rwlock_init(&mutexp->u.rwlock,
+ rwlockattrp)), ret);
+ if (rwlockattrp != NULL)
+ (void)pthread_rwlockattr_destroy(rwlockattrp);
+
+ F_SET(mutexp, DB_MUTEX_SHARED);
+ /* For rwlocks, we're done - cannot use the mutex or cond */
+ goto err;
+#endif
+ }
+#endif
+#ifndef HAVE_MUTEX_THREAD_ONLY
+ if (!LF_ISSET(DB_MUTEX_PROCESS_ONLY)) {
+ RET_SET((pthread_mutexattr_init(&mutexattr)), ret);
+ if (ret != 0)
+ goto err;
+ RET_SET((pthread_mutexattr_setpshared(
+ &mutexattr, PTHREAD_PROCESS_SHARED)), ret);
+ mutexattrp = &mutexattr;
+ }
+#endif
+
+ if (ret == 0)
+ RET_SET(
+ (pthread_mutex_init(&mutexp->u.m.mutex, mutexattrp)), ret);
+
+ if (mutexattrp != NULL)
+ (void)pthread_mutexattr_destroy(mutexattrp);
+ if (ret != 0)
+ goto err;
+ if (LF_ISSET(DB_MUTEX_SELF_BLOCK)) {
+#ifndef HAVE_MUTEX_THREAD_ONLY
+ if (!LF_ISSET(DB_MUTEX_PROCESS_ONLY)) {
+ RET_SET((pthread_condattr_init(&condattr)), ret);
+ if (ret != 0)
+ goto err;
+
+ condattrp = &condattr;
+ RET_SET((pthread_condattr_setpshared(
+ &condattr, PTHREAD_PROCESS_SHARED)), ret);
+ }
+#endif
+
+ if (ret == 0)
+ RET_SET((pthread_cond_init(
+ &mutexp->u.m.cond, condattrp)), ret);
+
+ F_SET(mutexp, DB_MUTEX_SELF_BLOCK);
+ if (condattrp != NULL)
+ (void)pthread_condattr_destroy(condattrp);
+ }
+
+ }
+#endif
+#ifdef HAVE_MUTEX_SOLARIS_LWP
+ /*
+ * XXX
+ * Gcc complains about missing braces in the static initializations of
+ * lwp_cond_t and lwp_mutex_t structures because the structures contain
+ * sub-structures/unions and the Solaris include file that defines the
+ * initialization values doesn't have surrounding braces. There's not
+ * much we can do.
+ */
+ if (LF_ISSET(DB_MUTEX_PROCESS_ONLY)) {
+ static lwp_mutex_t mi = DEFAULTMUTEX;
+
+ mutexp->mutex = mi;
+ } else {
+ static lwp_mutex_t mi = SHAREDMUTEX;
+
+ mutexp->mutex = mi;
+ }
+ if (LF_ISSET(DB_MUTEX_SELF_BLOCK)) {
+ if (LF_ISSET(DB_MUTEX_PROCESS_ONLY)) {
+ static lwp_cond_t ci = DEFAULTCV;
+
+ mutexp->cond = ci;
+ } else {
+ static lwp_cond_t ci = SHAREDCV;
+
+ mutexp->cond = ci;
+ }
+ F_SET(mutexp, DB_MUTEX_SELF_BLOCK);
+ }
+#endif
+#ifdef HAVE_MUTEX_UI_THREADS
+ {
+ int type;
+
+ type = LF_ISSET(DB_MUTEX_PROCESS_ONLY) ? USYNC_THREAD : USYNC_PROCESS;
+
+ ret = mutex_init(&mutexp->mutex, type, NULL);
+ if (ret == 0 && LF_ISSET(DB_MUTEX_SELF_BLOCK)) {
+ ret = cond_init(&mutexp->cond, type, NULL);
+
+ F_SET(mutexp, DB_MUTEX_SELF_BLOCK);
+ }}
+#endif
+
+err: if (ret != 0) {
+ __db_err(env, ret, "unable to initialize mutex");
+ }
+ return (ret);
+}
+
+/*
+ * __db_pthread_mutex_lock
+ * Lock on a mutex, blocking if necessary.
+ *
+ * self-blocking shared latches are not supported
+ *
+ * PUBLIC: int __db_pthread_mutex_lock __P((ENV *, db_mutex_t));
+ */
+int
+__db_pthread_mutex_lock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_THREAD_INFO *ip;
+ int ret;
+
+ dbenv = env->dbenv;
+
+ if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ CHECK_MTX_THREAD(env, mutexp);
+
+#if defined(HAVE_STATISTICS) && !defined(HAVE_MUTEX_HYBRID)
+ /*
+ * We want to know which mutexes are contentious, but don't want to
+ * do an interlocked test here -- that's slower when the underlying
+ * system has adaptive mutexes and can perform optimizations like
+ * spinning only if the thread holding the mutex is actually running
+ * on a CPU. Make a guess, using a normal load instruction.
+ */
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
+ ++mutexp->mutex_set_wait;
+ else
+ ++mutexp->mutex_set_nowait;
+#endif
+
+ if (F_ISSET(dbenv, DB_ENV_FAILCHK)) {
+ for (;;) {
+ RET_SET_PTHREAD_TRYLOCK(mutexp, ret);
+ if (ret != EBUSY)
+ break;
+ if (dbenv->is_alive(dbenv,
+ mutexp->pid, mutexp->tid, 0) == 0) {
+ ret = __env_set_state(env, &ip, THREAD_VERIFY);
+ if (ret != 0 ||
+ ip->dbth_state == THREAD_FAILCHK)
+ return (DB_RUNRECOVERY);
+ else {
+ /*
+ * Some thread other than the true
+ * FAILCHK thread in this process is
+ * asking for the mutex held by the
+ * dead process/thread. We will
+ * block here until someone else
+ * does the cleanup. Same behavior
+				 * as if we hadn't gone down the 'if
+ * DB_ENV_FAILCHK' path to start with.
+ */
+ RET_SET_PTHREAD_LOCK(mutexp, ret);
+ break;
+ }
+ }
+ }
+ } else
+ RET_SET_PTHREAD_LOCK(mutexp, ret);
+ if (ret != 0)
+ goto err;
+
+ if (F_ISSET(mutexp, DB_MUTEX_SELF_BLOCK)) {
+ /*
+ * If we are using hybrid mutexes then the pthread mutexes
+ * are only used to wait after spinning on the TAS mutex.
+ * Set the wait flag before checking to see if the mutex
+ * is still locked. The holder will clear DB_MUTEX_LOCKED
+ * before checking the wait counter.
+ */
+#ifdef HAVE_MUTEX_HYBRID
+ mutexp->wait++;
+ MUTEX_MEMBAR(mutexp->wait);
+ while (F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
+#else
+ while (MUTEXP_IS_BUSY(mutexp)) {
+#endif
+#if defined(HAVE_MUTEX_HYBRID)
+ STAT(mutexp->hybrid_wait++);
+#endif
+#ifdef MUTEX_DIAG
+ printf("block %d %x wait busy %x count %d\n",
+ mutex, pthread_self(),
+ MUTEXP_BUSY_FIELD(mutexp), mutexp->wait);
+#endif
+
+ RET_SET((pthread_cond_wait(
+ &mutexp->u.m.cond, &mutexp->u.m.mutex)), ret);
+#ifdef MUTEX_DIAG
+ printf("block %d %x wait returns %d busy %x\n",
+ mutex, pthread_self(),
+ ret, MUTEXP_BUSY_FIELD(mutexp));
+#endif
+ /*
+ * !!!
+ * Solaris bug workaround:
+ * pthread_cond_wait() sometimes returns ETIME -- out
+ * of sheer paranoia, check both ETIME and ETIMEDOUT.
+ * We believe this happens when the application uses
+ * SIGALRM for some purpose, e.g., the C library sleep
+ * call, and Solaris delivers the signal to the wrong
+ * LWP.
+ */
+ if (ret != 0 && ret != EINTR &&
+#ifdef ETIME
+ ret != ETIME &&
+#endif
+ ret != ETIMEDOUT) {
+ (void)pthread_mutex_unlock(&mutexp->u.m.mutex);
+ goto err;
+ }
+#ifdef HAVE_MUTEX_HYBRID
+ MUTEX_MEMBAR(mutexp->flags);
+#endif
+ }
+
+#ifdef HAVE_MUTEX_HYBRID
+ mutexp->wait--;
+#else
+ F_SET(mutexp, DB_MUTEX_LOCKED);
+ dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);
+#endif
+
+ /* #2471: HP-UX can sporadically return EFAULT. See above */
+ RETRY_ON_EFAULT(pthread_mutex_unlock(&mutexp->u.m.mutex), ret);
+ if (ret != 0)
+ goto err;
+ } else {
+#ifdef DIAGNOSTIC
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
+ char buf[DB_THREADID_STRLEN];
+ (void)dbenv->thread_id_string(dbenv,
+ mutexp->pid, mutexp->tid, buf);
+ __db_errx(env,
+ "pthread lock failed: lock currently in use: pid/tid: %s",
+ buf);
+ ret = EINVAL;
+ goto err;
+ }
+#endif
+ F_SET(mutexp, DB_MUTEX_LOCKED);
+ dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);
+ }
+
+#ifdef DIAGNOSTIC
+ /*
+ * We want to switch threads as often as possible. Yield every time
+ * we get a mutex to ensure contention.
+ */
+ if (F_ISSET(dbenv, DB_ENV_YIELDCPU))
+ __os_yield(env, 0, 0);
+#endif
+ return (0);
+
+err: __db_err(env, ret, "pthread lock failed");
+ return (__env_panic(env, ret));
+}
+
+#if defined(HAVE_SHARED_LATCHES) && !defined(HAVE_MUTEX_HYBRID)
+/*
+ * __db_pthread_mutex_readlock
+ * Take a shared lock on a mutex, blocking if necessary.
+ *
+ * PUBLIC: #if defined(HAVE_SHARED_LATCHES)
+ * PUBLIC: int __db_pthread_mutex_readlock __P((ENV *, db_mutex_t));
+ * PUBLIC: #endif
+ */
+int
+__db_pthread_mutex_readlock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ int ret;
+
+ dbenv = env->dbenv;
+
+ if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+ DB_ASSERT(env, F_ISSET(mutexp, DB_MUTEX_SHARED));
+
+ CHECK_MTX_THREAD(env, mutexp);
+
+#if defined(HAVE_STATISTICS) && !defined(HAVE_MUTEX_HYBRID)
+ /*
+ * We want to know which mutexes are contentious, but don't want to
+ * do an interlocked test here -- that's slower when the underlying
+ * system has adaptive mutexes and can perform optimizations like
+ * spinning only if the thread holding the mutex is actually running
+ * on a CPU. Make a guess, using a normal load instruction.
+ */
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
+ ++mutexp->mutex_set_rd_wait;
+ else
+ ++mutexp->mutex_set_rd_nowait;
+#endif
+
+ RET_SET((pthread_rwlock_rdlock(&mutexp->u.rwlock)), ret);
+ DB_ASSERT(env, !F_ISSET(mutexp, DB_MUTEX_LOCKED));
+ if (ret != 0)
+ goto err;
+
+#ifdef DIAGNOSTIC
+ /*
+ * We want to switch threads as often as possible. Yield every time
+ * we get a mutex to ensure contention.
+ */
+ if (F_ISSET(dbenv, DB_ENV_YIELDCPU))
+ __os_yield(env, 0, 0);
+#endif
+ return (0);
+
+err: __db_err(env, ret, "pthread readlock failed");
+ return (__env_panic(env, ret));
+}
+#endif
+
+/*
+ * __db_pthread_mutex_unlock --
+ * Release a mutex.
+ *
+ * PUBLIC: int __db_pthread_mutex_unlock __P((ENV *, db_mutex_t));
+ */
+int
+__db_pthread_mutex_unlock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_THREAD_INFO *ip;
+ int ret;
+#if defined(MUTEX_DIAG) && defined(HAVE_MUTEX_HYBRID)
+ int waiters;
+#endif
+
+ dbenv = env->dbenv;
+
+ if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+#if defined(MUTEX_DIAG) && defined(HAVE_MUTEX_HYBRID)
+ waiters = mutexp->wait;
+#endif
+
+#if !defined(HAVE_MUTEX_HYBRID) && !defined(HAVE_SHARED_LATCHES) && \
+ defined(DIAGNOSTIC)
+ if (!F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
+ __db_errx(
+ env, "pthread unlock failed: lock already unlocked");
+ return (__env_panic(env, EACCES));
+ }
+#endif
+ if (F_ISSET(mutexp, DB_MUTEX_SELF_BLOCK)) {
+ if (F_ISSET(dbenv, DB_ENV_FAILCHK)) {
+ RET_SET((pthread_mutex_trylock(
+ &mutexp->u.m.mutex)), ret);
+ while (ret == EBUSY) {
+ if (dbenv->is_alive(dbenv,
+ mutexp->pid, mutexp->tid, 0) == 0 ) {
+ ret = __env_set_state(
+ env, &ip, THREAD_VERIFY);
+ if (ret != 0 ||
+ ip->dbth_state == THREAD_FAILCHK)
+ return (DB_RUNRECOVERY);
+ else {
+ /*
+ * We are not the true
+ * failchk thread, so go
+ * ahead and block on mutex
+ * until someone else does the
+ * cleanup. This is the same
+ * behavior we would get if we
+					 * hadn't gone down the 'if
+ * DB_ENV_FAILCHK' path.
+ */
+ RET_SET((pthread_mutex_lock(
+ &mutexp->u.m.mutex)), ret);
+ break;
+ }
+ }
+
+ RET_SET((pthread_mutex_trylock(
+ &mutexp->u.m.mutex)), ret);
+ }
+ } else
+ RET_SET((pthread_mutex_lock(&mutexp->u.m.mutex)), ret);
+ if (ret != 0)
+ goto err;
+
+#ifdef HAVE_MUTEX_HYBRID
+ STAT(mutexp->hybrid_wakeup++);
+#else
+ F_CLR(mutexp, DB_MUTEX_LOCKED); /* nop if DB_MUTEX_SHARED */
+#endif
+
+ if (F_ISSET(mutexp, DB_MUTEX_SHARED))
+ RET_SET(
+ (pthread_cond_broadcast(&mutexp->u.m.cond)), ret);
+ else
+ RET_SET((pthread_cond_signal(&mutexp->u.m.cond)), ret);
+ if (ret != 0)
+ goto err;
+ } else {
+#ifndef HAVE_MUTEX_HYBRID
+ F_CLR(mutexp, DB_MUTEX_LOCKED);
+#endif
+ }
+
+ /* See comment above; workaround for [#2471]. */
+#if defined(HAVE_SHARED_LATCHES) && !defined(HAVE_MUTEX_HYBRID)
+ if (F_ISSET(mutexp, DB_MUTEX_SHARED))
+ RETRY_ON_EFAULT(pthread_rwlock_unlock(&mutexp->u.rwlock), ret);
+ else
+#endif
+ RETRY_ON_EFAULT(pthread_mutex_unlock(&mutexp->u.m.mutex), ret);
+
+err: if (ret != 0) {
+ __db_err(env, ret, "pthread unlock failed");
+ return (__env_panic(env, ret));
+ }
+#if defined(MUTEX_DIAG) && defined(HAVE_MUTEX_HYBRID)
+ if (!MUTEXP_IS_BUSY(mutexp) && mutexp->wait != 0)
+ printf("unlock %d %x busy %x waiters %d/%d\n",
+ mutex, pthread_self(), ret,
+ MUTEXP_BUSY_FIELD(mutexp), waiters, mutexp->wait);
+#endif
+ return (ret);
+}
+
+/*
+ * __db_pthread_mutex_destroy --
+ * Destroy a mutex.
+ * If it is a native shared latch (not hybrid) then
+ * destroy only one half of the rwlock/mutex&cond union,
+ * depending whether it was allocated as shared
+ *
+ * PUBLIC: int __db_pthread_mutex_destroy __P((ENV *, db_mutex_t));
+ */
+int
+__db_pthread_mutex_destroy(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_THREAD_INFO *ip;
+ int ret, t_ret, failchk_thread;
+
+ if (!MUTEX_ON(env))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ ret = 0;
+ failchk_thread = FALSE;
+ /* Get information to determine if we are really the failchk thread. */
+ if (F_ISSET(env->dbenv, DB_ENV_FAILCHK)) {
+ ret = __env_set_state(env, &ip, THREAD_VERIFY);
+ if (ip != NULL && ip->dbth_state == THREAD_FAILCHK)
+ failchk_thread = TRUE;
+ }
+
+#ifndef HAVE_MUTEX_HYBRID
+ if (F_ISSET(mutexp, DB_MUTEX_SHARED)) {
+#if defined(HAVE_SHARED_LATCHES)
+ /*
+ * If there were dead processes waiting on the condition
+ * we may not be able to destroy it. Let failchk thread skip
+ * this. XXX What operating system resources might this leak?
+ */
+ if (!failchk_thread)
+ RET_SET(
+ (pthread_rwlock_destroy(&mutexp->u.rwlock)), ret);
+ /* For rwlocks, we're done - must not destroy rest of union */
+ return (ret);
+#endif
+ }
+#endif
+ if (F_ISSET(mutexp, DB_MUTEX_SELF_BLOCK)) {
+ /*
+ * If there were dead processes waiting on the condition
+ * we may not be able to destroy it. Let failchk thread
+ * skip this.
+ */
+ if (!failchk_thread)
+ RET_SET((pthread_cond_destroy(&mutexp->u.m.cond)), ret);
+ if (ret != 0)
+ __db_err(env, ret, "unable to destroy cond");
+ }
+ RET_SET((pthread_mutex_destroy(&mutexp->u.m.mutex)), t_ret);
+ if (t_ret != 0 && !failchk_thread) {
+ __db_err(env, t_ret, "unable to destroy mutex");
+ if (ret == 0)
+ ret = t_ret;
+ }
+ return (ret);
+}
diff --git a/db-4.8.30/mutex/mut_region.c b/db-4.8.30/mutex/mut_region.c
new file mode 100644
index 0000000..e985ac2
--- /dev/null
+++ b/db-4.8.30/mutex/mut_region.c
@@ -0,0 +1,407 @@
+/*-
+ * See the file LICENSE for redistribution information.
+ *
+ * Copyright (c) 1996-2009 Oracle. All rights reserved.
+ *
+ * $Id$
+ */
+
+#include "db_config.h"
+
+#include "db_int.h"
+#include "dbinc/log.h"
+#include "dbinc/lock.h"
+#include "dbinc/mp.h"
+#include "dbinc/txn.h"
+
+static size_t __mutex_align_size __P((ENV *));
+static int __mutex_region_init __P((ENV *, DB_MUTEXMGR *));
+static size_t __mutex_region_size __P((ENV *));
+
+/*
+ * __mutex_open --
+ * Open a mutex region.
+ *
+ * PUBLIC: int __mutex_open __P((ENV *, int));
+ */
+int
+__mutex_open(env, create_ok)
+ ENV *env;
+ int create_ok;
+{
+ DB_ENV *dbenv;
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ db_mutex_t mutex;
+ u_int32_t cpu_count;
+ u_int i;
+ int ret;
+
+ dbenv = env->dbenv;
+
+ /*
+ * Initialize the ENV handle information if not already initialized.
+ *
+ * Align mutexes on the byte boundaries specified by the application.
+ */
+ if (dbenv->mutex_align == 0)
+ dbenv->mutex_align = MUTEX_ALIGN;
+ if (dbenv->mutex_tas_spins == 0) {
+ cpu_count = __os_cpu_count();
+ if ((ret = __mutex_set_tas_spins(dbenv, cpu_count == 1 ?
+ cpu_count : cpu_count * MUTEX_SPINS_PER_PROCESSOR)) != 0)
+ return (ret);
+ }
+
+ /*
+ * If the user didn't set an absolute value on the number of mutexes
+ * we'll need, figure it out. We're conservative in our allocation,
+ * we need mutexes for DB handles, group-commit queues and other things
+ * applications allocate at run-time. The application may have kicked
+ * up our count to allocate its own mutexes, add that in.
+ */
+ if (dbenv->mutex_cnt == 0)
+ dbenv->mutex_cnt =
+ __lock_region_mutex_count(env) +
+ __log_region_mutex_count(env) +
+ __memp_region_mutex_count(env) +
+ __txn_region_mutex_count(env) +
+ dbenv->mutex_inc + 100;
+
+ /* Create/initialize the mutex manager structure. */
+ if ((ret = __os_calloc(env, 1, sizeof(DB_MUTEXMGR), &mtxmgr)) != 0)
+ return (ret);
+
+ /* Join/create the mutex region. */
+ mtxmgr->reginfo.env = env;
+ mtxmgr->reginfo.type = REGION_TYPE_MUTEX;
+ mtxmgr->reginfo.id = INVALID_REGION_ID;
+ mtxmgr->reginfo.flags = REGION_JOIN_OK;
+ if (create_ok)
+ F_SET(&mtxmgr->reginfo, REGION_CREATE_OK);
+ if ((ret = __env_region_attach(env,
+ &mtxmgr->reginfo, __mutex_region_size(env))) != 0)
+ goto err;
+
+ /* If we created the region, initialize it. */
+ if (F_ISSET(&mtxmgr->reginfo, REGION_CREATE))
+ if ((ret = __mutex_region_init(env, mtxmgr)) != 0)
+ goto err;
+
+ /* Set the local addresses. */
+ mtxregion = mtxmgr->reginfo.primary =
+ R_ADDR(&mtxmgr->reginfo, mtxmgr->reginfo.rp->primary);
+ mtxmgr->mutex_array = R_ADDR(&mtxmgr->reginfo, mtxregion->mutex_off);
+
+ env->mutex_handle = mtxmgr;
+
+ /* Allocate initial queue of mutexes. */
+ if (env->mutex_iq != NULL) {
+ DB_ASSERT(env, F_ISSET(&mtxmgr->reginfo, REGION_CREATE));
+ for (i = 0; i < env->mutex_iq_next; ++i) {
+ if ((ret = __mutex_alloc_int(
+ env, 0, env->mutex_iq[i].alloc_id,
+ env->mutex_iq[i].flags, &mutex)) != 0)
+ goto err;
+ /*
+ * Confirm we allocated the right index, correcting
+ * for avoiding slot 0 (MUTEX_INVALID).
+ */
+ DB_ASSERT(env, mutex == i + 1);
+ }
+ __os_free(env, env->mutex_iq);
+ env->mutex_iq = NULL;
+#ifndef HAVE_ATOMIC_SUPPORT
+ /* If necessary allocate the atomic emulation mutexes. */
+ for (i = 0; i != MAX_ATOMIC_MUTEXES; i++)
+ if ((ret = __mutex_alloc_int(
+ env, 0, MTX_ATOMIC_EMULATION,
+ 0, &mtxregion->mtx_atomic[i])) != 0)
+ return (ret);
+#endif
+
+ /*
+ * This is the first place we can test mutexes and we need to
+ * know if they're working. (They CAN fail, for example on
+ * SunOS, when using fcntl(2) for locking and using an
+ * in-memory filesystem as the database environment directory.
+ * But you knew that, I'm sure -- it probably wasn't worth
+ * mentioning.)
+ */
+ mutex = MUTEX_INVALID;
+		if ((ret =
+		    __mutex_alloc(env, MTX_MUTEX_TEST, 0, &mutex)) != 0 ||
+ (ret = __mutex_lock(env, mutex)) != 0 ||
+ (ret = __mutex_unlock(env, mutex)) != 0 ||
+ (ret = __mutex_trylock(env, mutex)) != 0 ||
+ (ret = __mutex_unlock(env, mutex)) != 0 ||
+ (ret = __mutex_free(env, &mutex)) != 0) {
+ __db_errx(env,
+ "Unable to acquire/release a mutex; check configuration");
+ goto err;
+ }
+#ifdef HAVE_SHARED_LATCHES
+		if ((ret =
+		    __mutex_alloc(env,
+		    MTX_MUTEX_TEST, DB_MUTEX_SHARED, &mutex)) != 0 ||
+ (ret = __mutex_lock(env, mutex)) != 0 ||
+ (ret = __mutex_unlock(env, mutex)) != 0 ||
+ (ret = __mutex_rdlock(env, mutex)) != 0 ||
+ (ret = __mutex_rdlock(env, mutex)) != 0 ||
+ (ret = __mutex_unlock(env, mutex)) != 0 ||
+ (ret = __mutex_unlock(env, mutex)) != 0 ||
+ (ret = __mutex_free(env, &mutex)) != 0) {
+ __db_errx(env,
+ "Unable to acquire/release a shared latch; check configuration");
+ goto err;
+ }
+#endif
+ }
+ return (0);
+
+err: env->mutex_handle = NULL;
+ if (mtxmgr->reginfo.addr != NULL)
+ (void)__env_region_detach(env, &mtxmgr->reginfo, 0);
+
+ __os_free(env, mtxmgr);
+ return (ret);
+}
+
+/*
+ * __mutex_region_init --
+ * Initialize a mutex region in shared memory.
+ */
+static int
+__mutex_region_init(env, mtxmgr)
+ ENV *env;
+ DB_MUTEXMGR *mtxmgr;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXREGION *mtxregion;
+ db_mutex_t i;
+ int ret;
+ void *mutex_array;
+
+ dbenv = env->dbenv;
+
+ COMPQUIET(mutexp, NULL);
+
+ if ((ret = __env_alloc(&mtxmgr->reginfo,
+ sizeof(DB_MUTEXREGION), &mtxmgr->reginfo.primary)) != 0) {
+ __db_errx(env,
+ "Unable to allocate memory for the mutex region");
+ return (ret);
+ }
+ mtxmgr->reginfo.rp->primary =
+ R_OFFSET(&mtxmgr->reginfo, mtxmgr->reginfo.primary);
+ mtxregion = mtxmgr->reginfo.primary;
+ memset(mtxregion, 0, sizeof(*mtxregion));
+
+ if ((ret = __mutex_alloc(
+ env, MTX_MUTEX_REGION, 0, &mtxregion->mtx_region)) != 0)
+ return (ret);
+ mtxmgr->reginfo.mtx_alloc = mtxregion->mtx_region;
+
+ mtxregion->mutex_size = __mutex_align_size(env);
+
+ mtxregion->stat.st_mutex_align = dbenv->mutex_align;
+ mtxregion->stat.st_mutex_cnt = dbenv->mutex_cnt;
+ mtxregion->stat.st_mutex_tas_spins = dbenv->mutex_tas_spins;
+
+ /*
+ * Get a chunk of memory to be used for the mutexes themselves. Each
+ * piece of the memory must be properly aligned, and that alignment
+ * may be more restrictive than the memory alignment returned by the
+ * underlying allocation code. We already know how much memory each
+ * mutex in the array will take up, but we need to offset the first
+ * mutex in the array so the array begins properly aligned.
+ *
+ * The OOB mutex (MUTEX_INVALID) is 0. To make this work, we ignore
+ * the first allocated slot when we build the free list. We have to
+ * correct the count by 1 here, though, otherwise our counter will be
+ * off by 1.
+ */
+ if ((ret = __env_alloc(&mtxmgr->reginfo,
+ mtxregion->stat.st_mutex_align +
+ (mtxregion->stat.st_mutex_cnt + 1) * mtxregion->mutex_size,
+ &mutex_array)) != 0) {
+ __db_errx(env,
+ "Unable to allocate memory for mutexes from the region");
+ return (ret);
+ }
+
+ mtxregion->mutex_off_alloc = R_OFFSET(&mtxmgr->reginfo, mutex_array);
+ mutex_array = ALIGNP_INC(mutex_array, mtxregion->stat.st_mutex_align);
+ mtxregion->mutex_off = R_OFFSET(&mtxmgr->reginfo, mutex_array);
+ mtxmgr->mutex_array = mutex_array;
+
+ /*
+ * Put the mutexes on a free list and clear the allocated flag.
+ *
+ * The OOB mutex (MUTEX_INVALID) is 0, skip it.
+ *
+ * The comparison is <, not <=, because we're looking ahead one
+ * in each link.
+ */
+ for (i = 1; i < mtxregion->stat.st_mutex_cnt; ++i) {
+ mutexp = MUTEXP_SET(mtxmgr, i);
+ mutexp->flags = 0;
+ mutexp->mutex_next_link = i + 1;
+ }
+ mutexp = MUTEXP_SET(mtxmgr, i);
+ mutexp->flags = 0;
+ mutexp->mutex_next_link = MUTEX_INVALID;
+ mtxregion->mutex_next = 1;
+ mtxregion->stat.st_mutex_free = mtxregion->stat.st_mutex_cnt;
+ mtxregion->stat.st_mutex_inuse = mtxregion->stat.st_mutex_inuse_max = 0;
+
+ return (0);
+}
+
+/*
+ * __mutex_env_refresh --
+ * Clean up after the mutex region on a close or failed open.
+ *
+ * PUBLIC: int __mutex_env_refresh __P((ENV *));
+ */
+int
+__mutex_env_refresh(env)
+ ENV *env;
+{
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ REGINFO *reginfo;
+ int ret;
+
+ mtxmgr = env->mutex_handle;
+ reginfo = &mtxmgr->reginfo;
+ mtxregion = mtxmgr->reginfo.primary;
+
+ /*
+ * If a private region, return the memory to the heap. Not needed for
+ * filesystem-backed or system shared memory regions, that memory isn't
+ * owned by any particular process.
+ */
+ if (F_ISSET(env, ENV_PRIVATE)) {
+ reginfo->mtx_alloc = MUTEX_INVALID;
+
+#ifdef HAVE_MUTEX_SYSTEM_RESOURCES
+ /*
+ * If destroying the mutex region, return any system resources
+ * to the system.
+ */
+ __mutex_resource_return(env, reginfo);
+#endif
+ /* Discard the mutex array. */
+ __env_alloc_free(
+ reginfo, R_ADDR(reginfo, mtxregion->mutex_off_alloc));
+ }
+
+ /* Detach from the region. */
+ ret = __env_region_detach(env, reginfo, 0);
+
+ __os_free(env, mtxmgr);
+
+ env->mutex_handle = NULL;
+
+ return (ret);
+}
+
+/*
+ * __mutex_align_size --
+ * Return how much memory each mutex will take up if an array of them
+ * are to be properly aligned, individually, within the array.
+ */
+static size_t
+__mutex_align_size(env)
+ ENV *env;
+{
+ DB_ENV *dbenv;
+
+ dbenv = env->dbenv;
+
+ return ((size_t)DB_ALIGN(sizeof(DB_MUTEX), dbenv->mutex_align));
+}
+
+/*
+ * __mutex_region_size --
+ * Return the amount of space needed for the mutex region.
+ */
+static size_t
+__mutex_region_size(env)
+ ENV *env;
+{
+ DB_ENV *dbenv;
+ size_t s;
+
+ dbenv = env->dbenv;
+
+ s = sizeof(DB_MUTEXMGR) + 1024;
+
+ /* We discard one mutex for the OOB slot. */
+ s += __env_alloc_size(
+ (dbenv->mutex_cnt + 1) *__mutex_align_size(env));
+
+ return (s);
+}
+
+#ifdef HAVE_MUTEX_SYSTEM_RESOURCES
+/*
+ * __mutex_resource_return
+ * Return any system-allocated mutex resources to the system.
+ *
+ * PUBLIC: void __mutex_resource_return __P((ENV *, REGINFO *));
+ */
+void
+__mutex_resource_return(env, infop)
+ ENV *env;
+ REGINFO *infop;
+{
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr, mtxmgr_st;
+ DB_MUTEXREGION *mtxregion;
+ db_mutex_t i;
+ void *orig_handle;
+
+ /*
+	 * This routine is called in two cases: first, when discarding the
+	 * regions from a previous Berkeley DB run, during recovery; and
+	 * second, when discarding regions as we shut down the database
+	 * environment.
+	 *
+	 * Walk the list of mutexes and destroy any live ones.
+	 *
+	 * This is just like joining a region -- the REGINFO we're handed is
+	 * the same as the one returned by __env_region_attach(); all we have
+	 * to do is fill in the links.
+ *
+ * !!!
+ * The region may be corrupted, of course. We're safe because the
+ * only things we look at are things that are initialized when the
+ * region is created, and never modified after that.
+ */
+ memset(&mtxmgr_st, 0, sizeof(mtxmgr_st));
+ mtxmgr = &mtxmgr_st;
+ mtxmgr->reginfo = *infop;
+ mtxregion = mtxmgr->reginfo.primary =
+ R_ADDR(&mtxmgr->reginfo, mtxmgr->reginfo.rp->primary);
+ mtxmgr->mutex_array = R_ADDR(&mtxmgr->reginfo, mtxregion->mutex_off);
+
+ /*
+ * This is a little strange, but the mutex_handle is what all of the
+ * underlying mutex routines will use to determine if they should do
+ * any work and to find their information. Save/restore the handle
+ * around the work loop.
+ *
+ * The OOB mutex (MUTEX_INVALID) is 0, skip it.
+ */
+ orig_handle = env->mutex_handle;
+ env->mutex_handle = mtxmgr;
+	for (i = 1; i <= mtxregion->stat.st_mutex_cnt; ++i) {
+ mutexp = MUTEXP_SET(mtxmgr, i);
+ if (F_ISSET(mutexp, DB_MUTEX_ALLOCATED))
+ (void)__mutex_destroy(env, i);
+ }
+ env->mutex_handle = orig_handle;
+}
+#endif
diff --git a/db-4.8.30/mutex/mut_stat.c b/db-4.8.30/mutex/mut_stat.c
new file mode 100644
index 0000000..ecf6a7b
--- /dev/null
+++ b/db-4.8.30/mutex/mut_stat.c
@@ -0,0 +1,521 @@
+/*-
+ * See the file LICENSE for redistribution information.
+ *
+ * Copyright (c) 1996-2009 Oracle. All rights reserved.
+ *
+ * $Id$
+ */
+
+#include "db_config.h"
+
+#include "db_int.h"
+#include "dbinc/db_page.h"
+#include "dbinc/db_am.h"
+
+#ifdef HAVE_STATISTICS
+static int __mutex_print_all __P((ENV *, u_int32_t));
+static const char *__mutex_print_id __P((int));
+static int __mutex_print_stats __P((ENV *, u_int32_t));
+static void __mutex_print_summary __P((ENV *));
+static int __mutex_stat __P((ENV *, DB_MUTEX_STAT **, u_int32_t));
+
+/*
+ * __mutex_stat_pp --
+ * ENV->mutex_stat pre/post processing.
+ *
+ * PUBLIC: int __mutex_stat_pp __P((DB_ENV *, DB_MUTEX_STAT **, u_int32_t));
+ */
+int
+__mutex_stat_pp(dbenv, statp, flags)
+ DB_ENV *dbenv;
+ DB_MUTEX_STAT **statp;
+ u_int32_t flags;
+{
+ DB_THREAD_INFO *ip;
+ ENV *env;
+ int ret;
+
+ env = dbenv->env;
+
+ if ((ret = __db_fchk(env,
+ "DB_ENV->mutex_stat", flags, DB_STAT_CLEAR)) != 0)
+ return (ret);
+
+ ENV_ENTER(env, ip);
+ REPLICATION_WRAP(env, (__mutex_stat(env, statp, flags)), 0, ret);
+ ENV_LEAVE(env, ip);
+ return (ret);
+}
+
+/*
+ * __mutex_stat --
+ * ENV->mutex_stat.
+ */
+static int
+__mutex_stat(env, statp, flags)
+ ENV *env;
+ DB_MUTEX_STAT **statp;
+ u_int32_t flags;
+{
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ DB_MUTEX_STAT *stats;
+ int ret;
+
+ *statp = NULL;
+ mtxmgr = env->mutex_handle;
+ mtxregion = mtxmgr->reginfo.primary;
+
+ if ((ret = __os_umalloc(env, sizeof(DB_MUTEX_STAT), &stats)) != 0)
+ return (ret);
+
+ MUTEX_SYSTEM_LOCK(env);
+
+ /*
+ * Most fields are maintained in the underlying region structure.
+ * Region size and region mutex are not.
+ */
+ *stats = mtxregion->stat;
+ stats->st_regsize = mtxmgr->reginfo.rp->size;
+ __mutex_set_wait_info(env, mtxregion->mtx_region,
+ &stats->st_region_wait, &stats->st_region_nowait);
+ if (LF_ISSET(DB_STAT_CLEAR))
+ __mutex_clear(env, mtxregion->mtx_region);
+
+ MUTEX_SYSTEM_UNLOCK(env);
+
+ *statp = stats;
+ return (0);
+}
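+
+/*
+ * Illustrative usage sketch (not part of DB): an application reaches
+ * this code through the public DB_ENV method.  Error handling is
+ * abbreviated; the returned buffer is allocated on the application's
+ * behalf and, per the usual DB statistics convention, is discarded
+ * with free():
+ *
+ *	DB_MUTEX_STAT *sp;
+ *
+ *	if (dbenv->mutex_stat(dbenv, &sp, 0) == 0) {
+ *		printf("mutexes in use: %lu\n", (u_long)sp->st_mutex_inuse);
+ *		free(sp);
+ *	}
+ */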
+
+/*
+ * __mutex_stat_print_pp --
+ * ENV->mutex_stat_print pre/post processing.
+ *
+ * PUBLIC: int __mutex_stat_print_pp __P((DB_ENV *, u_int32_t));
+ */
+int
+__mutex_stat_print_pp(dbenv, flags)
+ DB_ENV *dbenv;
+ u_int32_t flags;
+{
+ DB_THREAD_INFO *ip;
+ ENV *env;
+ int ret;
+
+ env = dbenv->env;
+
+ if ((ret = __db_fchk(env, "DB_ENV->mutex_stat_print",
+ flags, DB_STAT_ALL | DB_STAT_CLEAR)) != 0)
+ return (ret);
+
+ ENV_ENTER(env, ip);
+ REPLICATION_WRAP(env, (__mutex_stat_print(env, flags)), 0, ret);
+ ENV_LEAVE(env, ip);
+ return (ret);
+}
+
+/*
+ * __mutex_stat_print --
+ * ENV->mutex_stat_print method.
+ *
+ * PUBLIC: int __mutex_stat_print __P((ENV *, u_int32_t));
+ */
+int
+__mutex_stat_print(env, flags)
+ ENV *env;
+ u_int32_t flags;
+{
+ u_int32_t orig_flags;
+ int ret;
+
+ orig_flags = flags;
+ LF_CLR(DB_STAT_CLEAR | DB_STAT_SUBSYSTEM);
+ if (flags == 0 || LF_ISSET(DB_STAT_ALL)) {
+ ret = __mutex_print_stats(env, orig_flags);
+ __mutex_print_summary(env);
+ if (flags == 0 || ret != 0)
+ return (ret);
+ }
+
+ if (LF_ISSET(DB_STAT_ALL))
+ ret = __mutex_print_all(env, orig_flags);
+
+	return (ret);
+}
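+
+/*
+ * Illustrative usage sketch: an application can dump both the summary
+ * statistics and the per-mutex lines produced by __mutex_print_all()
+ * with:
+ *
+ *	(void)dbenv->mutex_stat_print(dbenv, DB_STAT_ALL);
+ */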
+
+static void
+__mutex_print_summary(env)
+ ENV *env;
+{
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ db_mutex_t i;
+ u_int32_t counts[MTX_MAX_ENTRY + 2];
+ int alloc_id;
+
+ mtxmgr = env->mutex_handle;
+ mtxregion = mtxmgr->reginfo.primary;
+ memset(counts, 0, sizeof(counts));
+
+	for (i = 1; i <= mtxregion->stat.st_mutex_cnt; ++i) {
+ mutexp = MUTEXP_SET(mtxmgr, i);
+
+ if (!F_ISSET(mutexp, DB_MUTEX_ALLOCATED))
+ counts[0]++;
+ else if (mutexp->alloc_id > MTX_MAX_ENTRY)
+ counts[MTX_MAX_ENTRY + 1]++;
+ else
+ counts[mutexp->alloc_id]++;
+ }
+ __db_msg(env, "Mutex counts");
+	__db_msg(env, "%lu\tUnallocated", (u_long)counts[0]);
+ for (alloc_id = 1; alloc_id <= MTX_TXN_REGION + 1; alloc_id++)
+ if (counts[alloc_id] != 0)
+ __db_msg(env, "%lu\t%s",
+ (u_long)counts[alloc_id],
+ __mutex_print_id(alloc_id));
+}
+
+/*
+ * __mutex_print_stats --
+ * Display default mutex region statistics.
+ */
+static int
+__mutex_print_stats(env, flags)
+ ENV *env;
+ u_int32_t flags;
+{
+ DB_MUTEX_STAT *sp;
+ int ret;
+
+ if ((ret = __mutex_stat(env, &sp, LF_ISSET(DB_STAT_CLEAR))) != 0)
+ return (ret);
+
+ if (LF_ISSET(DB_STAT_ALL))
+ __db_msg(env, "Default mutex region information:");
+
+ __db_dlbytes(env, "Mutex region size",
+ (u_long)0, (u_long)0, (u_long)sp->st_regsize);
+ __db_dl_pct(env,
+ "The number of region locks that required waiting",
+ (u_long)sp->st_region_wait, DB_PCT(sp->st_region_wait,
+ sp->st_region_wait + sp->st_region_nowait), NULL);
+ STAT_ULONG("Mutex alignment", sp->st_mutex_align);
+ STAT_ULONG("Mutex test-and-set spins", sp->st_mutex_tas_spins);
+ STAT_ULONG("Mutex total count", sp->st_mutex_cnt);
+ STAT_ULONG("Mutex free count", sp->st_mutex_free);
+ STAT_ULONG("Mutex in-use count", sp->st_mutex_inuse);
+ STAT_ULONG("Mutex maximum in-use count", sp->st_mutex_inuse_max);
+
+ __os_ufree(env, sp);
+
+ return (0);
+}
+
+/*
+ * __mutex_print_all --
+ * Display debugging mutex region statistics.
+ */
+static int
+__mutex_print_all(env, flags)
+ ENV *env;
+ u_int32_t flags;
+{
+ static const FN fn[] = {
+ { DB_MUTEX_ALLOCATED, "alloc" },
+ { DB_MUTEX_LOCKED, "locked" },
+ { DB_MUTEX_LOGICAL_LOCK, "logical" },
+ { DB_MUTEX_PROCESS_ONLY, "process-private" },
+ { DB_MUTEX_SELF_BLOCK, "self-block" },
+ { 0, NULL }
+ };
+ DB_MSGBUF mb, *mbp;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ db_mutex_t i;
+
+ DB_MSGBUF_INIT(&mb);
+ mbp = &mb;
+
+ mtxmgr = env->mutex_handle;
+ mtxregion = mtxmgr->reginfo.primary;
+
+ __db_print_reginfo(env, &mtxmgr->reginfo, "Mutex", flags);
+ __db_msg(env, "%s", DB_GLOBAL(db_line));
+
+ __db_msg(env, "DB_MUTEXREGION structure:");
+ __mutex_print_debug_single(env,
+ "DB_MUTEXREGION region mutex", mtxregion->mtx_region, flags);
+ STAT_ULONG("Size of the aligned mutex", mtxregion->mutex_size);
+ STAT_ULONG("Next free mutex", mtxregion->mutex_next);
+
+ /*
+ * The OOB mutex (MUTEX_INVALID) is 0, skip it.
+ *
+ * We're not holding the mutex region lock, so we're racing threads of
+ * control allocating mutexes. That's OK, it just means we display or
+ * clear statistics while mutexes are moving.
+ */
+ __db_msg(env, "%s", DB_GLOBAL(db_line));
+ __db_msg(env, "mutex\twait/nowait, pct wait, holder, flags");
+	for (i = 1; i <= mtxregion->stat.st_mutex_cnt; ++i) {
+ mutexp = MUTEXP_SET(mtxmgr, i);
+
+ if (!F_ISSET(mutexp, DB_MUTEX_ALLOCATED))
+ continue;
+
+ __db_msgadd(env, mbp, "%5lu\t", (u_long)i);
+
+ __mutex_print_debug_stats(env, mbp, i, flags);
+
+ if (mutexp->alloc_id != 0)
+ __db_msgadd(env,
+ mbp, ", %s", __mutex_print_id(mutexp->alloc_id));
+
+ __db_prflags(env, mbp, mutexp->flags, fn, " (", ")");
+
+ DB_MSGBUF_FLUSH(env, mbp);
+ }
+
+ return (0);
+}
+
+/*
+ * __mutex_print_debug_single --
+ * Print mutex internal debugging statistics for a single mutex on a
+ * single output line.
+ *
+ * PUBLIC: void __mutex_print_debug_single
+ * PUBLIC: __P((ENV *, const char *, db_mutex_t, u_int32_t));
+ */
+void
+__mutex_print_debug_single(env, tag, mutex, flags)
+ ENV *env;
+ const char *tag;
+ db_mutex_t mutex;
+ u_int32_t flags;
+{
+ DB_MSGBUF mb, *mbp;
+
+ DB_MSGBUF_INIT(&mb);
+ mbp = &mb;
+
+ if (LF_ISSET(DB_STAT_SUBSYSTEM))
+ LF_CLR(DB_STAT_CLEAR);
+ __db_msgadd(env, mbp, "%lu\t%s ", (u_long)mutex, tag);
+ __mutex_print_debug_stats(env, mbp, mutex, flags);
+ DB_MSGBUF_FLUSH(env, mbp);
+}
+
+/*
+ * __mutex_print_debug_stats --
+ *	Print mutex internal debugging statistics, that is, the statistics
+ *	enclosed in square brackets.
+ *
+ * PUBLIC: void __mutex_print_debug_stats
+ * PUBLIC: __P((ENV *, DB_MSGBUF *, db_mutex_t, u_int32_t));
+ */
+void
+__mutex_print_debug_stats(env, mbp, mutex, flags)
+ ENV *env;
+ DB_MSGBUF *mbp;
+ db_mutex_t mutex;
+ u_int32_t flags;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ u_long value;
+ char buf[DB_THREADID_STRLEN];
+#if defined(HAVE_SHARED_LATCHES) && defined(HAVE_MUTEX_HYBRID)
+ int sharecount;
+#endif
+
+ if (mutex == MUTEX_INVALID) {
+ __db_msgadd(env, mbp, "[!Set]");
+ return;
+ }
+
+ dbenv = env->dbenv;
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ __db_msgadd(env, mbp, "[");
+ if ((value = mutexp->mutex_set_wait) < 10000000)
+ __db_msgadd(env, mbp, "%lu", value);
+ else
+ __db_msgadd(env, mbp, "%luM", value / 1000000);
+ if ((value = mutexp->mutex_set_nowait) < 10000000)
+ __db_msgadd(env, mbp, "/%lu", value);
+ else
+ __db_msgadd(env, mbp, "/%luM", value / 1000000);
+
+ __db_msgadd(env, mbp, " %d%% ",
+ DB_PCT(mutexp->mutex_set_wait,
+ mutexp->mutex_set_wait + mutexp->mutex_set_nowait));
+
+#if defined(HAVE_SHARED_LATCHES)
+ if (F_ISSET(mutexp, DB_MUTEX_SHARED)) {
+ __db_msgadd(env, mbp, " rd ");
+ if ((value = mutexp->mutex_set_rd_wait) < 10000000)
+ __db_msgadd(env, mbp, "%lu", value);
+ else
+ __db_msgadd(env, mbp, "%luM", value / 1000000);
+ if ((value = mutexp->mutex_set_rd_nowait) < 10000000)
+ __db_msgadd(env, mbp, "/%lu", value);
+ else
+ __db_msgadd(env, mbp, "/%luM", value / 1000000);
+ __db_msgadd(env, mbp, " %d%% ",
+ DB_PCT(mutexp->mutex_set_rd_wait,
+ mutexp->mutex_set_rd_wait + mutexp->mutex_set_rd_nowait));
+ }
+#endif
+
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
+ __db_msgadd(env, mbp, "%s]",
+ dbenv->thread_id_string(dbenv,
+ mutexp->pid, mutexp->tid, buf));
+ /* Only hybrid shared latches expose the share count. */
+#if defined(HAVE_SHARED_LATCHES) && defined(HAVE_MUTEX_HYBRID)
+ else if (F_ISSET(mutexp, DB_MUTEX_SHARED) &&
+ (sharecount = atomic_read(&mutexp->sharecount)) != 0) {
+ if (sharecount == 1)
+ __db_msgadd(env, mbp, "1 reader");
+ else
+ __db_msgadd(env, mbp, "%d readers", sharecount);
+ /* Show the thread which last acquired the latch. */
+ __db_msgadd(env, mbp, "%s]",
+ dbenv->thread_id_string(dbenv,
+ mutexp->pid, mutexp->tid, buf));
+ }
+#endif
+ else
+ __db_msgadd(env, mbp, "!Own]");
+
+#ifdef HAVE_MUTEX_HYBRID
+ if (mutexp->hybrid_wait != 0 || mutexp->hybrid_wakeup != 0)
+ __db_msgadd(env, mbp, " <wakeups %d/%d>",
+ mutexp->hybrid_wait, mutexp->hybrid_wakeup);
+#endif
+
+ if (LF_ISSET(DB_STAT_CLEAR))
+ __mutex_clear(env, mutex);
+}
+
+static const char *
+__mutex_print_id(alloc_id)
+ int alloc_id;
+{
+ switch (alloc_id) {
+ case MTX_APPLICATION: return ("application allocated");
+ case MTX_ATOMIC_EMULATION: return ("atomic emulation");
+ case MTX_DB_HANDLE: return ("db handle");
+ case MTX_ENV_DBLIST: return ("env dblist");
+ case MTX_ENV_HANDLE: return ("env handle");
+ case MTX_ENV_REGION: return ("env region");
+ case MTX_LOCK_REGION: return ("lock region");
+ case MTX_LOGICAL_LOCK: return ("logical lock");
+ case MTX_LOG_FILENAME: return ("log filename");
+ case MTX_LOG_FLUSH: return ("log flush");
+ case MTX_LOG_HANDLE: return ("log handle");
+ case MTX_LOG_REGION: return ("log region");
+ case MTX_MPOOLFILE_HANDLE: return ("mpoolfile handle");
+ case MTX_MPOOL_BH: return ("mpool buffer");
+ case MTX_MPOOL_FH: return ("mpool filehandle");
+ case MTX_MPOOL_FILE_BUCKET: return ("mpool file bucket");
+ case MTX_MPOOL_HANDLE: return ("mpool handle");
+ case MTX_MPOOL_HASH_BUCKET: return ("mpool hash bucket");
+ case MTX_MPOOL_REGION: return ("mpool region");
+ case MTX_MUTEX_REGION: return ("mutex region");
+ case MTX_MUTEX_TEST: return ("mutex test");
+ case MTX_REPMGR: return ("replication manager");
+ case MTX_REP_CHKPT: return ("replication checkpoint");
+ case MTX_REP_DATABASE: return ("replication database");
+ case MTX_REP_EVENT: return ("replication event");
+ case MTX_REP_REGION: return ("replication region");
+ case MTX_SEQUENCE: return ("sequence");
+ case MTX_TWISTER: return ("twister");
+ case MTX_TXN_ACTIVE: return ("txn active list");
+ case MTX_TXN_CHKPT: return ("transaction checkpoint");
+ case MTX_TXN_COMMIT: return ("txn commit");
+ case MTX_TXN_MVCC: return ("txn mvcc");
+ case MTX_TXN_REGION: return ("txn region");
+ default: return ("unknown mutex type");
+ /* NOTREACHED */
+ }
+}
+
+/*
+ * __mutex_set_wait_info --
+ * Return mutex statistics.
+ *
+ * PUBLIC: void __mutex_set_wait_info
+ * PUBLIC: __P((ENV *, db_mutex_t, uintmax_t *, uintmax_t *));
+ */
+void
+__mutex_set_wait_info(env, mutex, waitp, nowaitp)
+ ENV *env;
+ db_mutex_t mutex;
+ uintmax_t *waitp, *nowaitp;
+{
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ *waitp = mutexp->mutex_set_wait;
+ *nowaitp = mutexp->mutex_set_nowait;
+}
+
+/*
+ * __mutex_clear --
+ * Clear mutex statistics.
+ *
+ * PUBLIC: void __mutex_clear __P((ENV *, db_mutex_t));
+ */
+void
+__mutex_clear(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ mutexp->mutex_set_wait = mutexp->mutex_set_nowait = 0;
+#ifdef HAVE_MUTEX_HYBRID
+ mutexp->hybrid_wait = mutexp->hybrid_wakeup = 0;
+#endif
+}
+
+#else /* !HAVE_STATISTICS */
+
+int
+__mutex_stat_pp(dbenv, statp, flags)
+ DB_ENV *dbenv;
+ DB_MUTEX_STAT **statp;
+ u_int32_t flags;
+{
+ COMPQUIET(statp, NULL);
+ COMPQUIET(flags, 0);
+
+ return (__db_stat_not_built(dbenv->env));
+}
+
+int
+__mutex_stat_print_pp(dbenv, flags)
+ DB_ENV *dbenv;
+ u_int32_t flags;
+{
+ COMPQUIET(flags, 0);
+
+ return (__db_stat_not_built(dbenv->env));
+}
+#endif
diff --git a/db-4.8.30/mutex/mut_stub.c b/db-4.8.30/mutex/mut_stub.c
new file mode 100644
index 0000000..d988e45
--- /dev/null
+++ b/db-4.8.30/mutex/mut_stub.c
@@ -0,0 +1,233 @@
+/*-
+ * See the file LICENSE for redistribution information.
+ *
+ * Copyright (c) 1996-2009 Oracle. All rights reserved.
+ *
+ * $Id$
+ */
+
+#ifndef HAVE_MUTEX_SUPPORT
+#include "db_config.h"
+
+#include "db_int.h"
+#include "dbinc/db_page.h"
+#include "dbinc/db_am.h"
+
+/*
+ * If the library wasn't compiled with mutex support, various routines
+ * aren't available. Stub them here, returning an appropriate error.
+ */
+static int __db_nomutex __P((ENV *));
+
+/*
+ * __db_nomutex --
+ * Error when a Berkeley DB build doesn't include mutexes.
+ */
+static int
+__db_nomutex(env)
+ ENV *env;
+{
+ __db_errx(env, "library build did not include support for mutexes");
+ return (DB_OPNOTSUP);
+}
+
+int
+__mutex_alloc_pp(dbenv, flags, indxp)
+ DB_ENV *dbenv;
+ u_int32_t flags;
+ db_mutex_t *indxp;
+{
+ COMPQUIET(flags, 0);
+ COMPQUIET(indxp, NULL);
+ return (__db_nomutex(dbenv->env));
+}
+
+int
+__mutex_alloc(env, alloc_id, flags, indxp)
+ ENV *env;
+ int alloc_id;
+ u_int32_t flags;
+ db_mutex_t *indxp;
+{
+ COMPQUIET(env, NULL);
+ COMPQUIET(alloc_id, 0);
+ COMPQUIET(flags, 0);
+ *indxp = MUTEX_INVALID;
+ return (0);
+}
+
+void
+__mutex_clear(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ COMPQUIET(env, NULL);
+ COMPQUIET(mutex, MUTEX_INVALID);
+}
+
+int
+__mutex_free_pp(dbenv, indx)
+ DB_ENV *dbenv;
+ db_mutex_t indx;
+{
+ COMPQUIET(indx, 0);
+ return (__db_nomutex(dbenv->env));
+}
+
+int
+__mutex_free(env, indxp)
+ ENV *env;
+ db_mutex_t *indxp;
+{
+ COMPQUIET(env, NULL);
+ *indxp = MUTEX_INVALID;
+ return (0);
+}
+
+int
+__mutex_get_align(dbenv, alignp)
+ DB_ENV *dbenv;
+ u_int32_t *alignp;
+{
+ COMPQUIET(alignp, NULL);
+ return (__db_nomutex(dbenv->env));
+}
+
+int
+__mutex_get_increment(dbenv, incrementp)
+ DB_ENV *dbenv;
+ u_int32_t *incrementp;
+{
+ COMPQUIET(incrementp, NULL);
+ return (__db_nomutex(dbenv->env));
+}
+
+int
+__mutex_get_max(dbenv, maxp)
+ DB_ENV *dbenv;
+ u_int32_t *maxp;
+{
+ COMPQUIET(maxp, NULL);
+ return (__db_nomutex(dbenv->env));
+}
+
+int
+__mutex_get_tas_spins(dbenv, tas_spinsp)
+ DB_ENV *dbenv;
+ u_int32_t *tas_spinsp;
+{
+ COMPQUIET(tas_spinsp, NULL);
+ return (__db_nomutex(dbenv->env));
+}
+
+int
+__mutex_lock_pp(dbenv, indx)
+ DB_ENV *dbenv;
+ db_mutex_t indx;
+{
+ COMPQUIET(indx, 0);
+ return (__db_nomutex(dbenv->env));
+}
+
+void
+__mutex_print_debug_single(env, tag, mutex, flags)
+ ENV *env;
+ const char *tag;
+ db_mutex_t mutex;
+ u_int32_t flags;
+{
+ COMPQUIET(env, NULL);
+ COMPQUIET(tag, NULL);
+ COMPQUIET(mutex, MUTEX_INVALID);
+ COMPQUIET(flags, 0);
+}
+
+void
+__mutex_print_debug_stats(env, mbp, mutex, flags)
+ ENV *env;
+ DB_MSGBUF *mbp;
+ db_mutex_t mutex;
+ u_int32_t flags;
+{
+ COMPQUIET(env, NULL);
+ COMPQUIET(mbp, NULL);
+ COMPQUIET(mutex, MUTEX_INVALID);
+ COMPQUIET(flags, 0);
+}
+
+int
+__mutex_set_align(dbenv, align)
+ DB_ENV *dbenv;
+ u_int32_t align;
+{
+ COMPQUIET(align, 0);
+ return (__db_nomutex(dbenv->env));
+}
+
+int
+__mutex_set_increment(dbenv, increment)
+ DB_ENV *dbenv;
+ u_int32_t increment;
+{
+ COMPQUIET(increment, 0);
+ return (__db_nomutex(dbenv->env));
+}
+
+int
+__mutex_set_max(dbenv, max)
+ DB_ENV *dbenv;
+ u_int32_t max;
+{
+ COMPQUIET(max, 0);
+ return (__db_nomutex(dbenv->env));
+}
+
+int
+__mutex_set_tas_spins(dbenv, tas_spins)
+ DB_ENV *dbenv;
+ u_int32_t tas_spins;
+{
+ COMPQUIET(tas_spins, 0);
+ return (__db_nomutex(dbenv->env));
+}
+
+void
+__mutex_set_wait_info(env, mutex, waitp, nowaitp)
+ ENV *env;
+ db_mutex_t mutex;
+ uintmax_t *waitp, *nowaitp;
+{
+ COMPQUIET(env, NULL);
+ COMPQUIET(mutex, MUTEX_INVALID);
+ *waitp = *nowaitp = 0;
+}
+
+int
+__mutex_stat_pp(dbenv, statp, flags)
+ DB_ENV *dbenv;
+ DB_MUTEX_STAT **statp;
+ u_int32_t flags;
+{
+ COMPQUIET(statp, NULL);
+ COMPQUIET(flags, 0);
+ return (__db_nomutex(dbenv->env));
+}
+
+int
+__mutex_stat_print_pp(dbenv, flags)
+ DB_ENV *dbenv;
+ u_int32_t flags;
+{
+ COMPQUIET(flags, 0);
+ return (__db_nomutex(dbenv->env));
+}
+
+int
+__mutex_unlock_pp(dbenv, indx)
+ DB_ENV *dbenv;
+ db_mutex_t indx;
+{
+ COMPQUIET(indx, 0);
+ return (__db_nomutex(dbenv->env));
+}
+#endif /* !HAVE_MUTEX_SUPPORT */
diff --git a/db-4.8.30/mutex/mut_tas.c b/db-4.8.30/mutex/mut_tas.c
new file mode 100644
index 0000000..f3922e0
--- /dev/null
+++ b/db-4.8.30/mutex/mut_tas.c
@@ -0,0 +1,560 @@
+/*-
+ * See the file LICENSE for redistribution information.
+ *
+ * Copyright (c) 1996-2009 Oracle. All rights reserved.
+ *
+ * $Id$
+ */
+
+#include "db_config.h"
+
+#include "db_int.h"
+
+static inline int __db_tas_mutex_lock_int __P((ENV *, db_mutex_t, int));
+static inline int __db_tas_mutex_readlock_int __P((ENV *, db_mutex_t, int));
+
+/*
+ * __db_tas_mutex_init --
+ * Initialize a test-and-set mutex.
+ *
+ * PUBLIC: int __db_tas_mutex_init __P((ENV *, db_mutex_t, u_int32_t));
+ */
+int
+__db_tas_mutex_init(env, mutex, flags)
+ ENV *env;
+ db_mutex_t mutex;
+ u_int32_t flags;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ int ret;
+
+#ifndef HAVE_MUTEX_HYBRID
+ COMPQUIET(flags, 0);
+#endif
+
+ dbenv = env->dbenv;
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ /* Check alignment. */
+ if (((uintptr_t)mutexp & (dbenv->mutex_align - 1)) != 0) {
+ __db_errx(env, "TAS: mutex not appropriately aligned");
+ return (EINVAL);
+ }
+
+#ifdef HAVE_SHARED_LATCHES
+ if (F_ISSET(mutexp, DB_MUTEX_SHARED))
+ atomic_init(&mutexp->sharecount, 0);
+ else
+#endif
+ if (MUTEX_INIT(&mutexp->tas)) {
+ ret = __os_get_syserr();
+ __db_syserr(env, ret, "TAS: mutex initialize");
+ return (__os_posix_err(ret));
+ }
+#ifdef HAVE_MUTEX_HYBRID
+ if ((ret = __db_pthread_mutex_init(env,
+ mutex, flags | DB_MUTEX_SELF_BLOCK)) != 0)
+ return (ret);
+#endif
+ return (0);
+}
+
+/*
+ * __db_tas_mutex_lock_int
+ * Internal function to lock a mutex, or just try to lock it without waiting
+ */
+static inline int
+__db_tas_mutex_lock_int(env, mutex, nowait)
+ ENV *env;
+ db_mutex_t mutex;
+ int nowait;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ DB_THREAD_INFO *ip;
+ u_int32_t nspins;
+ int ret;
+#ifndef HAVE_MUTEX_HYBRID
+ u_long ms, max_ms;
+#endif
+
+ dbenv = env->dbenv;
+
+ if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mtxregion = mtxmgr->reginfo.primary;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ CHECK_MTX_THREAD(env, mutexp);
+
+#ifdef HAVE_STATISTICS
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
+ ++mutexp->mutex_set_wait;
+ else
+ ++mutexp->mutex_set_nowait;
+#endif
+
+#ifndef HAVE_MUTEX_HYBRID
+ /*
+ * Wait 1ms initially, up to 10ms for mutexes backing logical database
+ * locks, and up to 25 ms for mutual exclusion data structure mutexes.
+ * SR: #7675
+ */
+ ms = 1;
+ max_ms = F_ISSET(mutexp, DB_MUTEX_LOGICAL_LOCK) ? 10 : 25;
+#endif
+
+ /*
+ * Only check the thread state once, by initializing the thread
+ * control block pointer to null. If it is not the failchk
+ * thread, then ip will have a valid value subsequent times
+ * in the loop.
+ */
+ ip = NULL;
+
+loop: /* Attempt to acquire the resource for N spins. */
+ for (nspins =
+ mtxregion->stat.st_mutex_tas_spins; nspins > 0; --nspins) {
+#ifdef HAVE_MUTEX_S390_CC_ASSEMBLY
+ tsl_t zero;
+
+ zero = 0;
+#endif
+
+ dbenv = env->dbenv;
+
+#ifdef HAVE_MUTEX_HPPA_MSEM_INIT
+ relock:
+#endif
+ /*
+		 * Avoid interlocked instructions until they're likely to
+		 * succeed by first checking whether the mutex is held.
+ */
+ if (MUTEXP_IS_BUSY(mutexp) || !MUTEXP_ACQUIRE(mutexp)) {
+ if (F_ISSET(dbenv, DB_ENV_FAILCHK) &&
+ ip == NULL && dbenv->is_alive(dbenv,
+ mutexp->pid, mutexp->tid, 0) == 0) {
+ ret = __env_set_state(env, &ip, THREAD_VERIFY);
+ if (ret != 0 ||
+ ip->dbth_state == THREAD_FAILCHK)
+ return (DB_RUNRECOVERY);
+ }
+ if (nowait)
+ return (DB_LOCK_NOTGRANTED);
+ /*
+ * Some systems (notably those with newer Intel CPUs)
+ * need a small pause here. [#6975]
+ */
+ MUTEX_PAUSE
+ continue;
+ }
+
+ MEMBAR_ENTER();
+
+#ifdef HAVE_MUTEX_HPPA_MSEM_INIT
+ /*
+ * HP semaphores are unlocked automatically when a holding
+ * process exits. If the mutex appears to be locked
+ * (F_ISSET(DB_MUTEX_LOCKED)) but we got here, assume this
+ * has happened. Set the pid and tid into the mutex and
+ * lock again. (The default state of the mutexes used to
+ * block in __lock_get_internal is locked, so exiting with
+ * a locked mutex is reasonable behavior for a process that
+ * happened to initialize or use one of them.)
+ */
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
+ dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);
+ goto relock;
+ }
+ /*
+ * If we make it here, the mutex isn't locked, the diagnostic
+ * won't fire, and we were really unlocked by someone calling
+ * the DB mutex unlock function.
+ */
+#endif
+#ifdef DIAGNOSTIC
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
+ char buf[DB_THREADID_STRLEN];
+ __db_errx(env,
+ "TAS lock failed: lock %d currently in use: ID: %s",
+ mutex, dbenv->thread_id_string(dbenv,
+ mutexp->pid, mutexp->tid, buf));
+ return (__env_panic(env, EACCES));
+ }
+#endif
+ F_SET(mutexp, DB_MUTEX_LOCKED);
+ dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);
+
+#ifdef DIAGNOSTIC
+ /*
+ * We want to switch threads as often as possible. Yield
+ * every time we get a mutex to ensure contention.
+ */
+ if (F_ISSET(dbenv, DB_ENV_YIELDCPU))
+ __os_yield(env, 0, 0);
+#endif
+ return (0);
+ }
+
+ /* Wait for the lock to become available. */
+#ifdef HAVE_MUTEX_HYBRID
+ /*
+ * By yielding here we can get the other thread to give up the
+ * mutex before calling the more expensive library mutex call.
+ * Tests have shown this to be a big win when there is contention.
+ * With shared latches check the locked bit only after checking
+ * that no one has the latch in shared mode.
+ */
+ __os_yield(env, 0, 0);
+ if (!MUTEXP_IS_BUSY(mutexp))
+ goto loop;
+ if ((ret = __db_pthread_mutex_lock(env, mutex)) != 0)
+ return (ret);
+#else
+ __os_yield(env, 0, ms * US_PER_MS);
+ if ((ms <<= 1) > max_ms)
+ ms = max_ms;
+#endif
+
+ /*
+ * We're spinning. The environment might be hung, and somebody else
+ * has already recovered it. The first thing recovery does is panic
+ * the environment. Check to see if we're never going to get this
+ * mutex.
+ */
+ PANIC_CHECK(env);
+
+ goto loop;
+}
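+
+/*
+ * Illustrative sketch (not part of DB): the non-hybrid wait path above
+ * is a bounded exponential backoff.  With try_lock() and sleep_ms() as
+ * hypothetical stand-ins for the spin loop and __os_yield():
+ *
+ *	ms = 1;
+ *	max_ms = logical_lock ? 10 : 25;
+ *	while (!try_lock()) {
+ *		sleep_ms(ms);
+ *		if ((ms <<= 1) > max_ms)
+ *			ms = max_ms;
+ *	}
+ *
+ * so the sleeps run 1, 2, 4, ... milliseconds, capped at max_ms.
+ */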
+
+/*
+ * __db_tas_mutex_lock
+ * Lock on a mutex, blocking if necessary.
+ *
+ * PUBLIC: int __db_tas_mutex_lock __P((ENV *, db_mutex_t));
+ */
+int
+__db_tas_mutex_lock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ return (__db_tas_mutex_lock_int(env, mutex, 0));
+}
+
+/*
+ * __db_tas_mutex_trylock
+ * Try to exclusively lock a mutex without ever blocking - ever!
+ *
+ * Returns 0 on success,
+ *	DB_LOCK_NOTGRANTED if the mutex is busy, and
+ *	possibly DB_RUNRECOVERY during DB_ENV_FAILCHK or after a panic.
+ *
+ * This will work for DB_MUTEX_SHARED, though it always tries
+ * for exclusive access.
+ *
+ * PUBLIC: int __db_tas_mutex_trylock __P((ENV *, db_mutex_t));
+ */
+int
+__db_tas_mutex_trylock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ return (__db_tas_mutex_lock_int(env, mutex, 1));
+}
+
+#if defined(HAVE_SHARED_LATCHES)
+/*
+ * __db_tas_mutex_readlock_int
+ * Internal function to get a shared lock on a latch, blocking if necessary.
+ */
+static inline int
+__db_tas_mutex_readlock_int(env, mutex, nowait)
+ ENV *env;
+ db_mutex_t mutex;
+ int nowait;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ DB_THREAD_INFO *ip;
+ int lock;
+ u_int32_t nspins;
+ int ret;
+#ifndef HAVE_MUTEX_HYBRID
+ u_long ms, max_ms;
+#endif
+ dbenv = env->dbenv;
+
+ if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mtxregion = mtxmgr->reginfo.primary;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ CHECK_MTX_THREAD(env, mutexp);
+
+ DB_ASSERT(env, F_ISSET(mutexp, DB_MUTEX_SHARED));
+#ifdef HAVE_STATISTICS
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
+ ++mutexp->mutex_set_rd_wait;
+ else
+ ++mutexp->mutex_set_rd_nowait;
+#endif
+
+#ifndef HAVE_MUTEX_HYBRID
+ /*
+ * Wait 1ms initially, up to 10ms for mutexes backing logical database
+ * locks, and up to 25 ms for mutual exclusion data structure mutexes.
+ * SR: #7675
+ */
+ ms = 1;
+ max_ms = F_ISSET(mutexp, DB_MUTEX_LOGICAL_LOCK) ? 10 : 25;
+#endif
+ /*
+ * Only check the thread state once, by initializing the thread
+ * control block pointer to null. If it is not the failchk
+ * thread, then ip will have a valid value subsequent times
+ * in the loop.
+ */
+ ip = NULL;
+
+loop: /* Attempt to acquire the resource for N spins. */
+ for (nspins =
+ mtxregion->stat.st_mutex_tas_spins; nspins > 0; --nspins) {
+ lock = atomic_read(&mutexp->sharecount);
+ if (lock == MUTEX_SHARE_ISEXCLUSIVE ||
+ !atomic_compare_exchange(env,
+ &mutexp->sharecount, lock, lock + 1)) {
+ if (F_ISSET(dbenv, DB_ENV_FAILCHK) &&
+ ip == NULL && dbenv->is_alive(dbenv,
+ mutexp->pid, mutexp->tid, 0) == 0) {
+ ret = __env_set_state(env, &ip, THREAD_VERIFY);
+ if (ret != 0 ||
+ ip->dbth_state == THREAD_FAILCHK)
+ return (DB_RUNRECOVERY);
+ }
+ if (nowait)
+ return (DB_LOCK_NOTGRANTED);
+ /*
+ * Some systems (notably those with newer Intel CPUs)
+ * need a small pause here. [#6975]
+ */
+ MUTEX_PAUSE
+ continue;
+ }
+
+ MEMBAR_ENTER();
+		/*
+		 * For shared latches the threadid is the last requestor's id.
+		 */
+ dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);
+
+ return (0);
+ }
+
+ /* Wait for the lock to become available. */
+#ifdef HAVE_MUTEX_HYBRID
+ /*
+ * By yielding here we can get the other thread to give up the
+ * mutex before calling the more expensive library mutex call.
+ * Tests have shown this to be a big win when there is contention.
+ */
+ __os_yield(env, 0, 0);
+ if (atomic_read(&mutexp->sharecount) != MUTEX_SHARE_ISEXCLUSIVE)
+ goto loop;
+ if ((ret = __db_pthread_mutex_lock(env, mutex)) != 0)
+ return (ret);
+#else
+ __os_yield(env, 0, ms * US_PER_MS);
+ if ((ms <<= 1) > max_ms)
+ ms = max_ms;
+#endif
+
+ /*
+ * We're spinning. The environment might be hung, and somebody else
+ * has already recovered it. The first thing recovery does is panic
+ * the environment. Check to see if we're never going to get this
+ * mutex.
+ */
+ PANIC_CHECK(env);
+
+ goto loop;
+}
+
+/*
+ * __db_tas_mutex_readlock
+ * Get a shared lock on a latch, waiting if necessary.
+ *
+ * PUBLIC: #if defined(HAVE_SHARED_LATCHES)
+ * PUBLIC: int __db_tas_mutex_readlock __P((ENV *, db_mutex_t));
+ * PUBLIC: #endif
+ */
+int
+__db_tas_mutex_readlock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ return (__db_tas_mutex_readlock_int(env, mutex, 0));
+}
+
+/*
+ * __db_tas_mutex_tryreadlock
+ * Try to get a shared lock on a latch; don't wait when busy.
+ *
+ * PUBLIC: #if defined(HAVE_SHARED_LATCHES)
+ * PUBLIC: int __db_tas_mutex_tryreadlock __P((ENV *, db_mutex_t));
+ * PUBLIC: #endif
+ */
+int
+__db_tas_mutex_tryreadlock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ return (__db_tas_mutex_readlock_int(env, mutex, 1));
+}
+#endif
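+
+/*
+ * Illustrative sketch of the shared-latch acquire used above: the
+ * share count is incremented with a compare-and-swap unless it holds
+ * the exclusive sentinel (MUTEX_SHARE_ISEXCLUSIVE).  A standalone
+ * version using the GCC __sync builtin in place of DB's
+ * atomic_compare_exchange; SHARE_ISEXCLUSIVE and try_readlock are
+ * hypothetical names:
+ *
+ *	static int
+ *	try_readlock(countp)
+ *		volatile int *countp;
+ *	{
+ *		int old;
+ *
+ *		old = *countp;
+ *		if (old == SHARE_ISEXCLUSIVE)
+ *			return (1);
+ *		return (__sync_val_compare_and_swap(countp,
+ *		    old, old + 1) == old ? 0 : 1);
+ *	}
+ *
+ * A non-zero return sends the caller back around the spin/backoff
+ * loop, exactly as the atomic_compare_exchange failure path does.
+ */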
+
+/*
+ * __db_tas_mutex_unlock --
+ * Release a mutex.
+ *
+ * PUBLIC: int __db_tas_mutex_unlock __P((ENV *, db_mutex_t));
+ *
+ * Hybrid shared latch wakeup
+ * When an exclusive requester waits for the last shared holder to
+ * release, it increments mutexp->wait and pthread_cond_wait()'s. The
+ * last shared unlock calls __db_pthread_mutex_unlock() to wake it.
+ */
+int
+__db_tas_mutex_unlock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+#ifdef HAVE_MUTEX_HYBRID
+ int ret;
+#ifdef MUTEX_DIAG
+ int waiters;
+#endif
+#endif
+#ifdef HAVE_SHARED_LATCHES
+ int sharecount;
+#endif
+ dbenv = env->dbenv;
+
+ if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+#if defined(HAVE_MUTEX_HYBRID) && defined(MUTEX_DIAG)
+ waiters = mutexp->wait;
+#endif
+
+#if defined(DIAGNOSTIC)
+#if defined(HAVE_SHARED_LATCHES)
+ if (F_ISSET(mutexp, DB_MUTEX_SHARED)) {
+ if (atomic_read(&mutexp->sharecount) == 0) {
+ __db_errx(env, "shared unlock %d already unlocked",
+ mutex);
+ return (__env_panic(env, EACCES));
+ }
+ } else
+#endif
+ if (!F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
+ __db_errx(env, "unlock %d already unlocked", mutex);
+ return (__env_panic(env, EACCES));
+ }
+#endif
+
+#ifdef HAVE_SHARED_LATCHES
+ if (F_ISSET(mutexp, DB_MUTEX_SHARED)) {
+ sharecount = atomic_read(&mutexp->sharecount);
+ /*MUTEX_MEMBAR(mutexp->sharecount);*/ /* XXX why? */
+ if (sharecount == MUTEX_SHARE_ISEXCLUSIVE) {
+ F_CLR(mutexp, DB_MUTEX_LOCKED);
+ /* Flush flag update before zeroing count */
+ MEMBAR_EXIT();
+ atomic_init(&mutexp->sharecount, 0);
+ } else {
+ DB_ASSERT(env, sharecount > 0);
+ MEMBAR_EXIT();
+ sharecount = atomic_dec(env, &mutexp->sharecount);
+ DB_ASSERT(env, sharecount >= 0);
+ if (sharecount > 0)
+ return (0);
+ }
+ } else
+#endif
+ {
+ F_CLR(mutexp, DB_MUTEX_LOCKED);
+ MUTEX_UNSET(&mutexp->tas);
+ }
+
+#ifdef HAVE_MUTEX_HYBRID
+#ifdef DIAGNOSTIC
+ if (F_ISSET(dbenv, DB_ENV_YIELDCPU))
+ __os_yield(env, 0, 0);
+#endif
+
+ /* Prevent the load of wait from being hoisted before MUTEX_UNSET */
+ MUTEX_MEMBAR(mutexp->flags);
+ if (mutexp->wait &&
+ (ret = __db_pthread_mutex_unlock(env, mutex)) != 0)
+ return (ret);
+
+#ifdef MUTEX_DIAG
+ if (mutexp->wait)
+ printf("tas_unlock %d %x waiters! busy %x waiters %d/%d\n",
+ mutex, pthread_self(),
+ MUTEXP_BUSY_FIELD(mutexp), waiters, mutexp->wait);
+#endif
+#endif
+
+ return (0);
+}
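+
+/*
+ * Illustrative note on the unlock ordering above: the holder's
+ * critical-section writes must become visible before the lock word is
+ * cleared (MEMBAR_EXIT() before MUTEX_UNSET/atomic_init), and the
+ * later load of mutexp->wait must not be hoisted above the release
+ * (MUTEX_MEMBAR).  In C11 atomics, which DB's portability layer
+ * predates, the same shape is roughly:
+ *
+ *	atomic_store_explicit(&lockword, 0, memory_order_release);
+ *	atomic_thread_fence(memory_order_seq_cst);
+ *	if (atomic_load_explicit(&waiters, memory_order_relaxed))
+ *		wake_waiters();
+ *
+ * where lockword, waiters and wake_waiters() are hypothetical.
+ */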
+
+/*
+ * __db_tas_mutex_destroy --
+ * Destroy a mutex.
+ *
+ * PUBLIC: int __db_tas_mutex_destroy __P((ENV *, db_mutex_t));
+ */
+int
+__db_tas_mutex_destroy(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+#ifdef HAVE_MUTEX_HYBRID
+ int ret;
+#endif
+
+ if (!MUTEX_ON(env))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ MUTEX_DESTROY(&mutexp->tas);
+
+#ifdef HAVE_MUTEX_HYBRID
+ if ((ret = __db_pthread_mutex_destroy(env, mutex)) != 0)
+ return (ret);
+#endif
+
+ COMPQUIET(mutexp, NULL); /* MUTEX_DESTROY may not be defined. */
+ return (0);
+}
diff --git a/db-4.8.30/mutex/mut_win32.c b/db-4.8.30/mutex/mut_win32.c
new file mode 100644
index 0000000..20987a1
--- /dev/null
+++ b/db-4.8.30/mutex/mut_win32.c
@@ -0,0 +1,540 @@
+/*-
+ * See the file LICENSE for redistribution information.
+ *
+ * Copyright (c) 2002-2009 Oracle. All rights reserved.
+ *
+ * $Id$
+ */
+
+#include "db_config.h"
+
+#define LOAD_ACTUAL_MUTEX_CODE
+#include "db_int.h"
+
+#include "dbinc/atomic.h"
+/*
+ * This is where we load in the actual test-and-set mutex code.
+ */
+#include "dbinc/mutex_int.h"
+
+/* We don't want to run this code even in "ordinary" diagnostic mode. */
+#undef MUTEX_DIAG
+
+/*
+ * Common code to get an event handle. This is executed whenever a mutex
+ * blocks, or when unlocking a mutex that a thread is waiting on. We can't
+ * keep these handles around, since the mutex structure is in shared memory,
+ * and each process gets its own handle value.
+ *
+ * We pass security attributes so that the created event is accessible by all
+ * users, in case a Windows service is sharing an environment with a local
+ * process run as a different user.
+ */
+static _TCHAR hex_digits[] = _T("0123456789abcdef");
+static SECURITY_DESCRIPTOR null_sd;
+static SECURITY_ATTRIBUTES all_sa;
+static int security_initialized = 0;
+
+static __inline int
+get_handle(env, mutexp, eventp)
+ ENV *env;
+ DB_MUTEX *mutexp;
+ HANDLE *eventp;
+{
+ _TCHAR idbuf[] = _T("db.m00000000");
+ _TCHAR *p = idbuf + 12;
+ int ret = 0;
+ u_int32_t id;
+
+ for (id = (mutexp)->id; id != 0; id >>= 4)
+ *--p = hex_digits[id & 0xf];
+
+#ifndef DB_WINCE
+ if (!security_initialized) {
+ InitializeSecurityDescriptor(&null_sd,
+ SECURITY_DESCRIPTOR_REVISION);
+ SetSecurityDescriptorDacl(&null_sd, TRUE, 0, FALSE);
+ all_sa.nLength = sizeof(SECURITY_ATTRIBUTES);
+ all_sa.bInheritHandle = FALSE;
+ all_sa.lpSecurityDescriptor = &null_sd;
+ security_initialized = 1;
+ }
+#endif
+
+ if ((*eventp = CreateEvent(&all_sa, FALSE, FALSE, idbuf)) == NULL) {
+ ret = __os_get_syserr();
+ __db_syserr(env, ret, "Win32 create event failed");
+ }
+
+ return (ret);
+}
+
+/*
+ * __db_win32_mutex_lock_int
+ *	Internal function to lock a win32 mutex.
+ *
+ * If the wait parameter is 0, this function will return DB_LOCK_NOTGRANTED
+ * rather than wait.
+ */
+static __inline int
+__db_win32_mutex_lock_int(env, mutex, wait)
+ ENV *env;
+ db_mutex_t mutex;
+ int wait;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ DB_THREAD_INFO *ip;
+ HANDLE event;
+ u_int32_t nspins;
+ int ms, ret;
+#ifdef MUTEX_DIAG
+ LARGE_INTEGER now;
+#endif
+ dbenv = env->dbenv;
+
+ if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mtxregion = mtxmgr->reginfo.primary;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ CHECK_MTX_THREAD(env, mutexp);
+
+ /*
+ * See WINCE_ATOMIC_MAGIC definition for details.
+ * Use sharecount, because the value just needs to be a db_atomic_t
+ * memory mapped onto the same page as those being Interlocked*.
+ */
+ WINCE_ATOMIC_MAGIC(&mutexp->sharecount);
+
+ event = NULL;
+ ms = 50;
+ ret = 0;
+
+ /*
+ * Only check the thread state once, by initializing the thread
+ * control block pointer to null. If it is not the failchk
+ * thread, then ip will have a valid value subsequent times
+ * in the loop.
+ */
+ ip = NULL;
+
+loop: /* Attempt to acquire the mutex mutex_tas_spins times, if waiting. */
+ for (nspins =
+ mtxregion->stat.st_mutex_tas_spins; nspins > 0; --nspins) {
+ /*
+ * We can avoid the (expensive) interlocked instructions if
+ * the mutex is already busy.
+ */
+ if (MUTEXP_IS_BUSY(mutexp) || !MUTEXP_ACQUIRE(mutexp)) {
+ if (F_ISSET(dbenv, DB_ENV_FAILCHK) &&
+ ip == NULL && dbenv->is_alive(dbenv,
+ mutexp->pid, mutexp->tid, 0) == 0) {
+ ret = __env_set_state(env, &ip, THREAD_VERIFY);
+ if (ret != 0 ||
+ ip->dbth_state == THREAD_FAILCHK)
+ return (DB_RUNRECOVERY);
+ }
+ if (!wait)
+ return (DB_LOCK_NOTGRANTED);
+ /*
+ * Some systems (notably those with newer Intel CPUs)
+ * need a small pause before retrying. [#6975]
+ */
+ MUTEX_PAUSE
+ continue;
+ }
+
+#ifdef DIAGNOSTIC
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
+ char buf[DB_THREADID_STRLEN];
+ __db_errx(env,
+ "Win32 lock failed: mutex already locked by %s",
+ dbenv->thread_id_string(dbenv,
+ mutexp->pid, mutexp->tid, buf));
+ return (__env_panic(env, EACCES));
+ }
+#endif
+ F_SET(mutexp, DB_MUTEX_LOCKED);
+ dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);
+
+#ifdef HAVE_STATISTICS
+ if (event == NULL)
+ ++mutexp->mutex_set_nowait;
+ else
+ ++mutexp->mutex_set_wait;
+#endif
+ if (event != NULL) {
+ CloseHandle(event);
+ InterlockedDecrement(&mutexp->nwaiters);
+#ifdef MUTEX_DIAG
+ if (ret != WAIT_OBJECT_0) {
+ QueryPerformanceCounter(&now);
+ printf("[%I64d]: Lost signal on mutex %p, "
+ "id %d, ms %d\n",
+ now.QuadPart, mutexp, mutexp->id, ms);
+ }
+#endif
+ }
+
+#ifdef DIAGNOSTIC
+ /*
+ * We want to switch threads as often as possible. Yield
+ * every time we get a mutex to ensure contention.
+ */
+ if (F_ISSET(dbenv, DB_ENV_YIELDCPU))
+ __os_yield(env, 0, 0);
+#endif
+
+ return (0);
+ }
+
+ /*
+ * Yield the processor; wait 50 ms initially, up to 1 second. This
+ * loop is needed to work around a race where the signal from the
+ * unlocking thread gets lost. We start at 50 ms because it's unlikely
+ * to happen often and we want to avoid wasting CPU.
+ */
+ if (event == NULL) {
+#ifdef MUTEX_DIAG
+ QueryPerformanceCounter(&now);
+ printf("[%I64d]: Waiting on mutex %p, id %d\n",
+ now.QuadPart, mutexp, mutexp->id);
+#endif
+ InterlockedIncrement(&mutexp->nwaiters);
+ if ((ret = get_handle(env, mutexp, &event)) != 0)
+ goto err;
+ }
+ if ((ret = WaitForSingleObject(event, ms)) == WAIT_FAILED) {
+ ret = __os_get_syserr();
+ goto err;
+ }
+ if ((ms <<= 1) > MS_PER_SEC)
+ ms = MS_PER_SEC;
+
+ PANIC_CHECK(env);
+ goto loop;
+
+err: __db_syserr(env, ret, "Win32 lock failed");
+ return (__env_panic(env, __os_posix_err(ret)));
+}
+
+/*
+ * __db_win32_mutex_init --
+ * Initialize a Win32 mutex.
+ *
+ * PUBLIC: int __db_win32_mutex_init __P((ENV *, db_mutex_t, u_int32_t));
+ */
+int
+__db_win32_mutex_init(env, mutex, flags)
+ ENV *env;
+ db_mutex_t mutex;
+ u_int32_t flags;
+{
+ DB_MUTEX *mutexp;
+
+ mutexp = MUTEXP_SET(env->mutex_handle, mutex);
+ mutexp->id = ((getpid() & 0xffff) << 16) ^ P_TO_UINT32(mutexp);
+ F_SET(mutexp, flags);
+
+ return (0);
+}
+
+/*
+ * __db_win32_mutex_lock
+ * Lock on a mutex, blocking if necessary.
+ *
+ * PUBLIC: int __db_win32_mutex_lock __P((ENV *, db_mutex_t));
+ */
+int
+__db_win32_mutex_lock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ return (__db_win32_mutex_lock_int(env, mutex, 1));
+}
+
+/*
+ * __db_win32_mutex_trylock
+ * Try to lock a mutex, returning without waiting if it is busy
+ *
+ * PUBLIC: int __db_win32_mutex_trylock __P((ENV *, db_mutex_t));
+ */
+int
+__db_win32_mutex_trylock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ return (__db_win32_mutex_lock_int(env, mutex, 0));
+}
+
+#if defined(HAVE_SHARED_LATCHES)
+/*
+ * __db_win32_mutex_readlock_int
+ *	Try to get a shared lock on a mutex, possibly waiting if requested
+ *	and necessary.
+ */
+int
+__db_win32_mutex_readlock_int(env, mutex, nowait)
+ ENV *env;
+ db_mutex_t mutex;
+ int nowait;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ DB_MUTEXREGION *mtxregion;
+ HANDLE event;
+ u_int32_t nspins;
+ int ms, ret;
+	long mtx_val;
+#ifdef MUTEX_DIAG
+ LARGE_INTEGER now;
+#endif
+ dbenv = env->dbenv;
+
+ if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mtxregion = mtxmgr->reginfo.primary;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+ CHECK_MTX_THREAD(env, mutexp);
+
+ /*
+ * See WINCE_ATOMIC_MAGIC definition for details.
+ * Use sharecount, because the value just needs to be a db_atomic_t
+ * memory mapped onto the same page as those being Interlocked*.
+ */
+ WINCE_ATOMIC_MAGIC(&mutexp->sharecount);
+
+ event = NULL;
+ ms = 50;
+ ret = 0;
+
+loop: /* Attempt to acquire the resource for N spins. */
+ for (nspins =
+ mtxregion->stat.st_mutex_tas_spins; nspins > 0; --nspins) {
+ /*
+ * We can avoid the (expensive) interlocked instructions if
+ * the mutex is already "set".
+ */
+retry: mtx_val = atomic_read(&mutexp->sharecount);
+ if (mtx_val == MUTEX_SHARE_ISEXCLUSIVE) {
+ if (nowait)
+ return (DB_LOCK_NOTGRANTED);
+
+ continue;
+ } else if (!atomic_compare_exchange(env, &mutexp->sharecount,
+ mtx_val, mtx_val + 1)) {
+ /*
+ * Some systems (notably those with newer Intel CPUs)
+ * need a small pause here. [#6975]
+ */
+ MUTEX_PAUSE
+ goto retry;
+ }
+
+#ifdef HAVE_STATISTICS
+ if (event == NULL)
+ ++mutexp->mutex_set_rd_nowait;
+ else
+ ++mutexp->mutex_set_rd_wait;
+#endif
+ if (event != NULL) {
+ CloseHandle(event);
+ InterlockedDecrement(&mutexp->nwaiters);
+#ifdef MUTEX_DIAG
+ if (ret != WAIT_OBJECT_0) {
+ QueryPerformanceCounter(&now);
+ printf("[%I64d]: Lost signal on mutex %p, "
+ "id %d, ms %d\n",
+ now.QuadPart, mutexp, mutexp->id, ms);
+ }
+#endif
+ }
+
+#ifdef DIAGNOSTIC
+ /*
+ * We want to switch threads as often as possible. Yield
+ * every time we get a mutex to ensure contention.
+ */
+ if (F_ISSET(dbenv, DB_ENV_YIELDCPU))
+ __os_yield(env, 0, 0);
+#endif
+
+ return (0);
+ }
+
+ /*
+ * Yield the processor; wait 50 ms initially, up to 1 second. This
+ * loop is needed to work around a race where the signal from the
+ * unlocking thread gets lost. We start at 50 ms because it's unlikely
+ * to happen often and we want to avoid wasting CPU.
+ */
+ if (event == NULL) {
+#ifdef MUTEX_DIAG
+ QueryPerformanceCounter(&now);
+ printf("[%I64d]: Waiting on mutex %p, id %d\n",
+ now.QuadPart, mutexp, mutexp->id);
+#endif
+ InterlockedIncrement(&mutexp->nwaiters);
+ if ((ret = get_handle(env, mutexp, &event)) != 0)
+ goto err;
+ }
+ if ((ret = WaitForSingleObject(event, ms)) == WAIT_FAILED) {
+ ret = __os_get_syserr();
+ goto err;
+ }
+ if ((ms <<= 1) > MS_PER_SEC)
+ ms = MS_PER_SEC;
+
+ PANIC_CHECK(env);
+ goto loop;
+
+err: __db_syserr(env, ret, "Win32 read lock failed");
+ return (__env_panic(env, __os_posix_err(ret)));
+}
+
+/*
+ * __db_win32_mutex_readlock
+ * Get a shared lock on a latch
+ *
+ * PUBLIC: #if defined(HAVE_SHARED_LATCHES)
+ * PUBLIC: int __db_win32_mutex_readlock __P((ENV *, db_mutex_t));
+ * PUBLIC: #endif
+ */
+int
+__db_win32_mutex_readlock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ return (__db_win32_mutex_readlock_int(env, mutex, 0));
+}
+
+/*
+ * __db_win32_mutex_tryreadlock
+ *	Try to get a shared lock on a latch without waiting.
+ *
+ * PUBLIC: #if defined(HAVE_SHARED_LATCHES)
+ * PUBLIC: int __db_win32_mutex_tryreadlock __P((ENV *, db_mutex_t));
+ * PUBLIC: #endif
+ */
+int
+__db_win32_mutex_tryreadlock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ return (__db_win32_mutex_readlock_int(env, mutex, 1));
+}
+#endif
+
+/*
+ * __db_win32_mutex_unlock --
+ * Release a mutex.
+ *
+ * PUBLIC: int __db_win32_mutex_unlock __P((ENV *, db_mutex_t));
+ */
+int
+__db_win32_mutex_unlock(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ DB_ENV *dbenv;
+ DB_MUTEX *mutexp;
+ DB_MUTEXMGR *mtxmgr;
+ HANDLE event;
+ int ret;
+#ifdef MUTEX_DIAG
+ LARGE_INTEGER now;
+#endif
+ dbenv = env->dbenv;
+
+ if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
+ return (0);
+
+ mtxmgr = env->mutex_handle;
+ mutexp = MUTEXP_SET(mtxmgr, mutex);
+
+#ifdef DIAGNOSTIC
+ if (!MUTEXP_IS_BUSY(mutexp) || !(F_ISSET(mutexp, DB_MUTEX_SHARED) ||
+ F_ISSET(mutexp, DB_MUTEX_LOCKED))) {
+ __db_errx(env,
+ "Win32 unlock failed: lock already unlocked: mutex %d busy %d",
+ mutex, MUTEXP_BUSY_FIELD(mutexp));
+ return (__env_panic(env, EACCES));
+ }
+#endif
+ /*
+ * If we have a shared latch, and a read lock (DB_MUTEX_LOCKED is only
+ * set for write locks), then decrement the latch. If the readlock is
+ * still held by other threads, just return. Otherwise go ahead and
+ * notify any waiting threads.
+ */
+#ifdef HAVE_SHARED_LATCHES
+ if (F_ISSET(mutexp, DB_MUTEX_SHARED)) {
+ if (F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
+ F_CLR(mutexp, DB_MUTEX_LOCKED);
+ if ((ret = InterlockedExchange(
+ (interlocked_val)(&atomic_read(
+ &mutexp->sharecount)), 0)) !=
+ MUTEX_SHARE_ISEXCLUSIVE) {
+ ret = DB_RUNRECOVERY;
+ goto err;
+ }
+ } else if (InterlockedDecrement(
+ (interlocked_val)(&atomic_read(&mutexp->sharecount))) > 0)
+ return (0);
+ } else
+#endif
+ {
+ F_CLR(mutexp, DB_MUTEX_LOCKED);
+ MUTEX_UNSET(&mutexp->tas);
+ }
+
+ if (mutexp->nwaiters > 0) {
+ if ((ret = get_handle(env, mutexp, &event)) != 0)
+ goto err;
+
+#ifdef MUTEX_DIAG
+ QueryPerformanceCounter(&now);
+ printf("[%I64d]: Signalling mutex %p, id %d\n",
+ now.QuadPart, mutexp, mutexp->id);
+#endif
+ if (!PulseEvent(event)) {
+ ret = __os_get_syserr();
+ CloseHandle(event);
+ goto err;
+ }
+
+ CloseHandle(event);
+ }
+
+ return (0);
+
+err: __db_syserr(env, ret, "Win32 unlock failed");
+ return (__env_panic(env, __os_posix_err(ret)));
+}
+
+/*
+ * __db_win32_mutex_destroy --
+ * Destroy a mutex.
+ *
+ * PUBLIC: int __db_win32_mutex_destroy __P((ENV *, db_mutex_t));
+ */
+int
+__db_win32_mutex_destroy(env, mutex)
+ ENV *env;
+ db_mutex_t mutex;
+{
+ return (0);
+}
diff --git a/db-4.8.30/mutex/test_mutex.c b/db-4.8.30/mutex/test_mutex.c
new file mode 100644
index 0000000..3804996
--- /dev/null
+++ b/db-4.8.30/mutex/test_mutex.c
@@ -0,0 +1,1051 @@
+/*-
+ * See the file LICENSE for redistribution information.
+ *
+ * Copyright (c) 1999-2009 Oracle. All rights reserved.
+ *
+ * Standalone mutex tester for Berkeley DB mutexes.
+ *
+ * $Id$
+ */
+
+#include "db_config.h"
+
+#include "db_int.h"
+
+#ifdef DB_WIN32
+#define MUTEX_THREAD_TEST 1
+
+extern int getopt(int, char * const *, const char *);
+
+typedef HANDLE os_pid_t;
+typedef HANDLE os_thread_t;
+
+#define os_thread_create(thrp, attr, func, arg) \
+ (((*(thrp) = CreateThread(NULL, 0, \
+ (LPTHREAD_START_ROUTINE)(func), (arg), 0, NULL)) == NULL) ? -1 : 0)
+#define os_thread_join(thr, statusp) \
+ ((WaitForSingleObject((thr), INFINITE) == WAIT_OBJECT_0) && \
+ GetExitCodeThread((thr), (LPDWORD)(statusp)) ? 0 : -1)
+#define os_thread_self() GetCurrentThreadId()
+
+#else /* !DB_WIN32 */
+
+#include <sys/wait.h>
+
+typedef pid_t os_pid_t;
+
+/*
+ * There's only one mutex implementation that can't support thread-level
+ * locking: UNIX/fcntl mutexes.
+ *
+ * The general Berkeley DB library configuration doesn't look for the POSIX
+ * pthread functions, with one exception -- pthread_yield.
+ *
+ * Use these two facts to decide if we're going to build with or without
+ * threads.
+ */
+#if !defined(HAVE_MUTEX_FCNTL) && defined(HAVE_PTHREAD_YIELD)
+#define MUTEX_THREAD_TEST 1
+
+#include <pthread.h>
+
+typedef pthread_t os_thread_t;
+
+#define os_thread_create(thrp, attr, func, arg) \
+ pthread_create((thrp), (attr), (func), (arg))
+#define os_thread_join(thr, statusp) pthread_join((thr), (statusp))
+#define os_thread_self() pthread_self()
+#endif /* HAVE_PTHREAD_YIELD */
+#endif /* !DB_WIN32 */
+
+#define OS_BAD_PID ((os_pid_t)-1)
+
+#define TESTDIR "TESTDIR" /* Working area */
+#define MT_FILE "TESTDIR/mutex.file"
+#define MT_FILE_QUIT "TESTDIR/mutex.file.quit"
+
+/*
+ * The backing data layout:
+ * TM[1] per-thread mutex array lock
+ *	TM[nthreads * nprocs]	per-thread mutex array
+ * TM[maxlocks] per-lock mutex array
+ */
+typedef struct {
+ db_mutex_t mutex; /* Mutex. */
+ u_long id; /* Holder's ID. */
+ u_int wakeme; /* Request to awake. */
+} TM;
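+
+/*
+ * Illustrative sketch of the address arithmetic this layout implies,
+ * matching len = sizeof(TM) * (1 + nthreads * nprocs + maxlocks)
+ * computed in main(); base is the address the backing file is mapped
+ * at:
+ *
+ *	gm_addr = base;
+ *	tm_addr = gm_addr + sizeof(TM);
+ *	lm_addr = tm_addr + nthreads * nprocs * sizeof(TM);
+ */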
+
+DB_ENV *dbenv; /* Backing environment */
+ENV *env;
+size_t len; /* Backing data chunk size. */
+
+u_int8_t *gm_addr; /* Global mutex */
+u_int8_t *lm_addr; /* Locker mutexes */
+u_int8_t *tm_addr; /* Thread mutexes */
+
+#ifdef MUTEX_THREAD_TEST
+os_thread_t *kidsp; /* Locker threads */
+os_thread_t wakep; /* Wakeup thread */
+#endif
+
+#ifndef HAVE_MMAP
+u_int nprocs = 1; /* -p: Processes. */
+u_int nthreads = 20; /* -t: Threads. */
+#elif MUTEX_THREAD_TEST
+u_int nprocs = 5; /* -p: Processes. */
+u_int nthreads = 4; /* -t: Threads. */
+#else
+u_int nprocs = 20; /* -p: Processes. */
+u_int nthreads = 1; /* -t: Threads. */
+#endif
+
+u_int maxlocks = 20; /* -l: Backing locks. */
+u_int nlocks = 10000; /* -n: Locks per process. */
+int verbose; /* -v: Verbosity. */
+
+const char *progname;
+
+void data_off(u_int8_t *, DB_FH *);
+void data_on(u_int8_t **, u_int8_t **, u_int8_t **, DB_FH **, int);
+int locker_start(u_long);
+int locker_wait(void);
+os_pid_t os_spawn(const char *, char *const[]);
+int os_wait(os_pid_t *, u_int);
+void *run_lthread(void *);
+void *run_wthread(void *);
+os_pid_t spawn_proc(u_long, char *, char *);
+void tm_env_close(void);
+int tm_env_init(void);
+void tm_mutex_destroy(void);
+void tm_mutex_init(void);
+void tm_mutex_stats(void);
+int usage(void);
+int wakeup_start(u_long);
+int wakeup_wait(void);
+
+int
+main(argc, argv)
+ int argc;
+ char *argv[];
+{
+ enum {LOCKER, WAKEUP, PARENT} rtype;
+ extern int optind;
+ extern char *optarg;
+ os_pid_t wakeup_pid, *pids;
+ u_long id;
+ u_int i;
+ DB_FH *fhp, *map_fhp;
+ int ch, err;
+ char *p, *tmpath, cmd[1024];
+
+ if ((progname = __db_rpath(argv[0])) == NULL)
+ progname = argv[0];
+ else
+ ++progname;
+
+ rtype = PARENT;
+ id = 0;
+ tmpath = argv[0];
+ while ((ch = getopt(argc, argv, "l:n:p:T:t:v")) != EOF)
+ switch (ch) {
+ case 'l':
+ maxlocks = (u_int)atoi(optarg);
+ break;
+ case 'n':
+ nlocks = (u_int)atoi(optarg);
+ break;
+ case 'p':
+ nprocs = (u_int)atoi(optarg);
+ break;
+ case 't':
+ if ((nthreads = (u_int)atoi(optarg)) == 0)
+ nthreads = 1;
+#if !defined(MUTEX_THREAD_TEST)
+ if (nthreads != 1) {
+ fprintf(stderr,
+ "%s: thread support not available or not compiled for this platform.\n",
+ progname);
+ return (EXIT_FAILURE);
+ }
+#endif
+ break;
+ case 'T':
+ if (!memcmp(optarg, "locker", sizeof("locker") - 1))
+ rtype = LOCKER;
+ else if (
+ !memcmp(optarg, "wakeup", sizeof("wakeup") - 1))
+ rtype = WAKEUP;
+ else
+ return (usage());
+ if ((p = strchr(optarg, '=')) == NULL)
+ return (usage());
+ id = (u_long)atoi(p + 1);
+ break;
+ case 'v':
+ verbose = 1;
+ break;
+ case '?':
+ default:
+ return (usage());
+ }
+ argc -= optind;
+ argv += optind;
+
+ /*
+ * If we're not running a multi-process test, we should be running
+ * a multi-thread test.
+ */
+ if (nprocs == 1 && nthreads == 1) {
+ fprintf(stderr,
+ "%s: running in a single process requires multiple threads\n",
+ progname);
+ return (EXIT_FAILURE);
+ }
+
+ len = sizeof(TM) * (1 + nthreads * nprocs + maxlocks);
+
+ /*
+ * In the multi-process test, the parent spawns processes that exec
+ * the original binary, ending up here. Each process joins the DB
+ * environment separately and then calls the supporting function.
+ */
+ if (rtype == LOCKER || rtype == WAKEUP) {
+ __os_yield(env, 3, 0); /* Let everyone catch up. */
+ /* Initialize random numbers. */
+ srand((u_int)time(NULL) % (u_int)getpid());
+
+ if (tm_env_init() != 0) /* Join the environment. */
+ exit(EXIT_FAILURE);
+ /* Join the backing data. */
+ data_on(&gm_addr, &tm_addr, &lm_addr, &map_fhp, 0);
+ if (verbose)
+ printf(
+ "Backing file: global (%#lx), threads (%#lx), locks (%#lx)\n",
+ (u_long)gm_addr, (u_long)tm_addr, (u_long)lm_addr);
+
+ if ((rtype == LOCKER ?
+ locker_start(id) : wakeup_start(id)) != 0)
+ exit(EXIT_FAILURE);
+ if ((rtype == LOCKER ? locker_wait() : wakeup_wait()) != 0)
+ exit(EXIT_FAILURE);
+
+ data_off(gm_addr, map_fhp); /* Detach from backing data. */
+
+ tm_env_close(); /* Detach from environment. */
+
+ exit(EXIT_SUCCESS);
+ }
+
+ /*
+ * The following code is only executed by the original parent process.
+ *
+ * Clean up from any previous runs.
+ */
+ snprintf(cmd, sizeof(cmd), "rm -rf %s", TESTDIR);
+ (void)system(cmd);
+ snprintf(cmd, sizeof(cmd), "mkdir %s", TESTDIR);
+ (void)system(cmd);
+
+ printf(
+ "%s: %u processes, %u threads/process, %u lock requests from %u locks\n",
+ progname, nprocs, nthreads, nlocks, maxlocks);
+ printf("%s: backing data %lu bytes\n", progname, (u_long)len);
+
+ if (tm_env_init() != 0) /* Create the environment. */
+ exit(EXIT_FAILURE);
+ /* Create the backing data. */
+ data_on(&gm_addr, &tm_addr, &lm_addr, &map_fhp, 1);
+ if (verbose)
+ printf(
+ "backing data: global (%#lx), threads (%#lx), locks (%#lx)\n",
+ (u_long)gm_addr, (u_long)tm_addr, (u_long)lm_addr);
+
+ tm_mutex_init(); /* Initialize mutexes. */
+
+ if (nprocs > 1) { /* Run the multi-process test. */
+ /* Allocate array of locker process IDs. */
+ if ((pids = calloc(nprocs, sizeof(os_pid_t))) == NULL) {
+ fprintf(stderr, "%s: %s\n", progname, strerror(errno));
+ goto fail;
+ }
+
+ /* Spawn locker processes and threads. */
+ for (i = 0; i < nprocs; ++i) {
+ if ((pids[i] =
+ spawn_proc(id, tmpath, "locker")) == OS_BAD_PID) {
+ fprintf(stderr,
+ "%s: failed to spawn a locker\n", progname);
+ goto fail;
+ }
+ id += nthreads;
+ }
+
+ /* Spawn wakeup process/thread. */
+ if ((wakeup_pid =
+ spawn_proc(id, tmpath, "wakeup")) == OS_BAD_PID) {
+ fprintf(stderr,
+ "%s: failed to spawn waker\n", progname);
+ goto fail;
+ }
+ ++id;
+
+ /* Wait for all lockers to exit. */
+ if ((err = os_wait(pids, nprocs)) != 0) {
+ fprintf(stderr, "%s: locker wait failed with %d\n",
+ progname, err);
+ goto fail;
+ }
+
+ /* Signal wakeup process to exit. */
+ if ((err = __os_open(
+ env, MT_FILE_QUIT, 0, DB_OSO_CREATE, 0664, &fhp)) != 0) {
+ fprintf(stderr,
+ "%s: open %s\n", progname, db_strerror(err));
+ goto fail;
+ }
+ (void)__os_closehandle(env, fhp);
+
+ /* Wait for wakeup process/thread. */
+ if ((err = os_wait(&wakeup_pid, 1)) != 0) {
+ fprintf(stderr, "%s: %lu: exited %d\n",
+ progname, (u_long)wakeup_pid, err);
+ goto fail;
+ }
+ } else { /* Run the single-process test. */
+ /* Spawn locker threads. */
+ if (locker_start(0) != 0)
+ goto fail;
+
+ /* Spawn wakeup thread. */
+ if (wakeup_start(nthreads) != 0)
+ goto fail;
+
+ /* Wait for all lockers to exit. */
+ if (locker_wait() != 0)
+ goto fail;
+
+ /* Signal wakeup process to exit. */
+ if ((err = __os_open(
+ env, MT_FILE_QUIT, 0, DB_OSO_CREATE, 0664, &fhp)) != 0) {
+ fprintf(stderr,
+ "%s: open %s\n", progname, db_strerror(err));
+ goto fail;
+ }
+ (void)__os_closehandle(env, fhp);
+
+ /* Wait for wakeup thread. */
+ if (wakeup_wait() != 0)
+ goto fail;
+ }
+
+ tm_mutex_stats(); /* Display run statistics. */
+ tm_mutex_destroy(); /* Destroy mutexes. */
+
+ data_off(gm_addr, map_fhp); /* Detach from backing data. */
+
+ tm_env_close(); /* Detach from environment. */
+
+ printf("%s: test succeeded\n", progname);
+ return (EXIT_SUCCESS);
+
+fail: printf("%s: FAILED!\n", progname);
+ return (EXIT_FAILURE);
+}
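+
+/*
+ * Illustrative usage sketch: a typical standalone run, with option
+ * letters taken from the getopt string above (the counts are arbitrary
+ * examples):
+ *
+ *	./test_mutex -p 4 -t 2 -l 20 -n 1000 -v
+ *
+ * The parent then re-invokes the same binary with -T locker=<id> or
+ * -T wakeup=<id> for each child, per the -T parsing above.
+ */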
+
+int
+locker_start(id)
+ u_long id;
+{
+#if defined(MUTEX_THREAD_TEST)
+ u_int i;
+ int err;
+
+ /*
+ * Spawn off threads. We have nthreads all locking and going to
+ * sleep, and one other thread cycling through and waking them up.
+ */
+ if ((kidsp =
+	    (os_thread_t *)calloc(nthreads, sizeof(os_thread_t))) == NULL) {
+ fprintf(stderr, "%s: %s\n", progname, strerror(errno));
+ return (1);
+ }
+ for (i = 0; i < nthreads; i++)
+ if ((err = os_thread_create(
+ &kidsp[i], NULL, run_lthread, (void *)(id + i))) != 0) {
+ fprintf(stderr, "%s: failed spawning thread: %s\n",
+ progname, db_strerror(err));
+ return (1);
+ }
+ return (0);
+#else
+ return (run_lthread((void *)id) == NULL ? 0 : 1);
+#endif
+}
+
+int
+locker_wait()
+{
+#if defined(MUTEX_THREAD_TEST)
+ u_int i;
+ void *retp;
+
+ /* Wait for the threads to exit. */
+ for (i = 0; i < nthreads; i++) {
+ (void)os_thread_join(kidsp[i], &retp);
+ if (retp != NULL) {
+ fprintf(stderr,
+ "%s: thread exited with error\n", progname);
+ return (1);
+ }
+ }
+ free(kidsp);
+#endif
+ return (0);
+}
+
+void *
+run_lthread(arg)
+ void *arg;
+{
+ TM *gp, *mp, *tp;
+ u_long id, tid;
+ u_int lock, nl;
+ int err, i;
+
+ id = (u_long)arg;
+#if defined(MUTEX_THREAD_TEST)
+ tid = (u_long)os_thread_self();
+#else
+ tid = 0;
+#endif
+ printf("Locker: ID %03lu (PID: %lu; TID: %lx)\n",
+ id, (u_long)getpid(), tid);
+
+ gp = (TM *)gm_addr;
+ tp = (TM *)(tm_addr + id * sizeof(TM));
+
+ for (nl = nlocks; nl > 0;) {
+ /* Select and acquire a data lock. */
+ lock = (u_int)rand() % maxlocks;
+ mp = (TM *)(lm_addr + lock * sizeof(TM));
+ if (verbose)
+ printf("%03lu: lock %d (mtx: %lu)\n",
+ id, lock, (u_long)mp->mutex);
+
+ if ((err = dbenv->mutex_lock(dbenv, mp->mutex)) != 0) {
+ fprintf(stderr, "%s: %03lu: never got lock %d: %s\n",
+ progname, id, lock, db_strerror(err));
+ return ((void *)1);
+ }
+ if (mp->id != 0) {
+ fprintf(stderr,
+ "%s: RACE! (%03lu granted lock %d held by %03lu)\n",
+ progname, id, lock, mp->id);
+ return ((void *)1);
+ }
+ mp->id = id;
+
+ /*
+ * Pretend to do some work, periodically checking to see if
+ * we still hold the mutex.
+ */
+ for (i = 0; i < 3; ++i) {
+ __os_yield(env, 0, (u_long)rand() % 3);
+ if (mp->id != id) {
+ fprintf(stderr,
+ "%s: RACE! (%03lu stole lock %d from %03lu)\n",
+ progname, mp->id, lock, id);
+ return ((void *)1);
+ }
+ }
+
+ /*
+ * Test self-blocking and unlocking by other threads/processes:
+ *
+ * acquire the global lock
+ * set our wakeup flag
+ * release the global lock
+ * acquire our per-thread lock
+ *
+ * The wakeup thread will wake us up.
+ */
+ if ((err = dbenv->mutex_lock(dbenv, gp->mutex)) != 0) {
+ fprintf(stderr, "%s: %03lu: global lock: %s\n",
+ progname, id, db_strerror(err));
+ return ((void *)1);
+ }
+ if (tp->id != 0 && tp->id != id) {
+ fprintf(stderr,
+ "%s: %03lu: per-thread mutex isn't mine, owned by %03lu\n",
+ progname, id, tp->id);
+ return ((void *)1);
+ }
+ tp->id = id;
+ if (verbose)
+ printf("%03lu: self-blocking (mtx: %lu)\n",
+ id, (u_long)tp->mutex);
+ if (tp->wakeme) {
+ fprintf(stderr,
+ "%s: %03lu: wakeup flag incorrectly set\n",
+ progname, id);
+ return ((void *)1);
+ }
+ tp->wakeme = 1;
+ if ((err = dbenv->mutex_unlock(dbenv, gp->mutex)) != 0) {
+ fprintf(stderr,
+ "%s: %03lu: global unlock: %s\n",
+ progname, id, db_strerror(err));
+ return ((void *)1);
+ }
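+		/*
+		 * The per-thread mutex was created with DB_MUTEX_SELF_BLOCK
+		 * and locked at initialization, so this second lock request
+		 * blocks until the wakeup thread releases it for us.
+		 */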
+ if ((err = dbenv->mutex_lock(dbenv, tp->mutex)) != 0) {
+ fprintf(stderr, "%s: %03lu: per-thread lock: %s\n",
+ progname, id, db_strerror(err));
+ return ((void *)1);
+ }
+ /* Time passes... */
+ if (tp->wakeme) {
+ fprintf(stderr, "%s: %03lu: wakeup flag not cleared\n",
+ progname, id);
+ return ((void *)1);
+ }
+
+ if (verbose)
+ printf("%03lu: release %d (mtx: %lu)\n",
+ id, lock, (u_long)mp->mutex);
+
+ /* Release the data lock. */
+ mp->id = 0;
+ if ((err = dbenv->mutex_unlock(dbenv, mp->mutex)) != 0) {
+ fprintf(stderr,
+ "%s: %03lu: lock release: %s\n",
+ progname, id, db_strerror(err));
+ return ((void *)1);
+ }
+
+ if (--nl % 1000 == 0)
+ printf("%03lu: %d\n", id, nl);
+ }
+
+ return (NULL);
+}
+
+int
+wakeup_start(id)
+ u_long id;
+{
+#if defined(MUTEX_THREAD_TEST)
+ int err;
+
+ /*
+ * Spawn off wakeup thread.
+ */
+ if ((err = os_thread_create(
+ &wakep, NULL, run_wthread, (void *)id)) != 0) {
+ fprintf(stderr, "%s: failed spawning wakeup thread: %s\n",
+ progname, db_strerror(err));
+ return (1);
+ }
+ return (0);
+#else
+ return (run_wthread((void *)id) == NULL ? 0 : 1);
+#endif
+}
+
+int
+wakeup_wait()
+{
+#if defined(MUTEX_THREAD_TEST)
+ void *retp;
+
+	/*
+	 * The wakeup thread exits once the quit file is created; wait
+	 * for it to finish here.
+	 */
+ (void)os_thread_join(wakep, &retp);
+ if (retp != NULL) {
+ fprintf(stderr,
+ "%s: wakeup thread exited with error\n", progname);
+ return (1);
+ }
+#endif
+ return (0);
+}
+
+/*
+ * run_wthread --
+ * Thread to wake up other threads that are sleeping.
+ */
+void *
+run_wthread(arg)
+ void *arg;
+{
+ TM *gp, *tp;
+ u_long id, tid;
+ u_int check_id;
+ int err, quitcheck;
+
+ id = (u_long)arg;
+ quitcheck = 0;
+#if defined(MUTEX_THREAD_TEST)
+ tid = (u_long)os_thread_self();
+#else
+ tid = 0;
+#endif
+ printf("Wakeup: ID %03lu (PID: %lu; TID: %lx)\n",
+ id, (u_long)getpid(), tid);
+
+ gp = (TM *)gm_addr;
+
+ /* Loop, waking up sleepers and periodically sleeping ourselves. */
+ for (check_id = 0;; ++check_id) {
+ /* Check to see if the locking threads have finished. */
+ if (++quitcheck >= 100) {
+ quitcheck = 0;
+ if (__os_exists(env, MT_FILE_QUIT, NULL) == 0)
+ break;
+ }
+
+ /* Check for ID wraparound. */
+ if (check_id == nthreads * nprocs)
+ check_id = 0;
+
+ /* Check for a thread that needs a wakeup. */
+ tp = (TM *)(tm_addr + check_id * sizeof(TM));
+ if (!tp->wakeme)
+ continue;
+
+ if (verbose) {
+ printf("%03lu: wakeup thread %03lu (mtx: %lu)\n",
+ id, tp->id, (u_long)tp->mutex);
+ (void)fflush(stdout);
+ }
+
+ /* Acquire the global lock. */
+ if ((err = dbenv->mutex_lock(dbenv, gp->mutex)) != 0) {
+ fprintf(stderr, "%s: wakeup: global lock: %s\n",
+ progname, db_strerror(err));
+ return ((void *)1);
+ }
+
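+		/*
+		 * Clear the flag and release the sleeper while holding the
+		 * global mutex; lockers set wakeme under the same mutex, so
+		 * the handshake is serialized.
+		 */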
+ tp->wakeme = 0;
+ if ((err = dbenv->mutex_unlock(dbenv, tp->mutex)) != 0) {
+ fprintf(stderr, "%s: wakeup: unlock: %s\n",
+ progname, db_strerror(err));
+ return ((void *)1);
+ }
+
+ if ((err = dbenv->mutex_unlock(dbenv, gp->mutex)) != 0) {
+ fprintf(stderr, "%s: wakeup: global unlock: %s\n",
+ progname, db_strerror(err));
+ return ((void *)1);
+ }
+
+ __os_yield(env, 0, (u_long)rand() % 3);
+ }
+ return (NULL);
+}
+
+/*
+ * tm_env_init --
+ * Create the backing database environment.
+ */
+int
+tm_env_init()
+{
+ u_int32_t flags;
+ int ret;
+ char *home;
+
+ /*
+ * Create an environment object and initialize it for error
+ * reporting.
+ */
+ if ((ret = db_env_create(&dbenv, 0)) != 0) {
+ fprintf(stderr, "%s: %s\n", progname, db_strerror(ret));
+ return (1);
+ }
+ env = dbenv->env;
+ dbenv->set_errfile(dbenv, stderr);
+ dbenv->set_errpfx(dbenv, progname);
+
+ /* Allocate enough mutexes. */
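+	/* One global, one self-blocking per thread, one per data lock. */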
+ if ((ret = dbenv->mutex_set_increment(dbenv,
+ 1 + nthreads * nprocs + maxlocks)) != 0) {
+ dbenv->err(dbenv, ret, "dbenv->mutex_set_increment");
+ return (1);
+ }
+
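+	/*
+	 * A single process can run in a private, heap-backed environment;
+	 * multiple processes need an on-disk environment home.
+	 */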
+ flags = DB_CREATE;
+ if (nprocs == 1) {
+ home = NULL;
+ flags |= DB_PRIVATE;
+ } else
+ home = TESTDIR;
+ if (nthreads != 1)
+ flags |= DB_THREAD;
+ if ((ret = dbenv->open(dbenv, home, flags, 0)) != 0) {
+ dbenv->err(dbenv, ret, "environment open: %s", home);
+ return (1);
+ }
+
+ return (0);
+}
+
+/*
+ * tm_env_close --
+ * Close the backing database environment.
+ */
+void
+tm_env_close()
+{
+ (void)dbenv->close(dbenv, 0);
+}
+
+/*
+ * tm_mutex_init --
+ * Initialize the mutexes.
+ */
+void
+tm_mutex_init()
+{
+ TM *mp;
+ u_int i;
+ int err;
+
+ if (verbose)
+ printf("Allocate the global mutex: ");
+ mp = (TM *)gm_addr;
+ if ((err = dbenv->mutex_alloc(dbenv, 0, &mp->mutex)) != 0) {
+ fprintf(stderr, "%s: DB_ENV->mutex_alloc (global): %s\n",
+ progname, db_strerror(err));
+ exit(EXIT_FAILURE);
+ }
+ if (verbose)
+ printf("%lu\n", (u_long)mp->mutex);
+
+ if (verbose)
+ printf(
+ "Allocate %d per-thread, self-blocking mutexes: ",
+ nthreads * nprocs);
+ for (i = 0; i < nthreads * nprocs; ++i) {
+ mp = (TM *)(tm_addr + i * sizeof(TM));
+ if ((err = dbenv->mutex_alloc(
+ dbenv, DB_MUTEX_SELF_BLOCK, &mp->mutex)) != 0) {
+ fprintf(stderr,
+ "%s: DB_ENV->mutex_alloc (per-thread %d): %s\n",
+ progname, i, db_strerror(err));
+ exit(EXIT_FAILURE);
+ }
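+		/* Lock it now so the first self-blocking request blocks. */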
+ if ((err = dbenv->mutex_lock(dbenv, mp->mutex)) != 0) {
+ fprintf(stderr,
+ "%s: DB_ENV->mutex_lock (per-thread %d): %s\n",
+ progname, i, db_strerror(err));
+ exit(EXIT_FAILURE);
+ }
+ if (verbose)
+ printf("%lu ", (u_long)mp->mutex);
+ }
+ if (verbose)
+ printf("\n");
+
+ if (verbose)
+ printf("Allocate %d per-lock mutexes: ", maxlocks);
+ for (i = 0; i < maxlocks; ++i) {
+ mp = (TM *)(lm_addr + i * sizeof(TM));
+ if ((err = dbenv->mutex_alloc(dbenv, 0, &mp->mutex)) != 0) {
+ fprintf(stderr,
+ "%s: DB_ENV->mutex_alloc (per-lock: %d): %s\n",
+ progname, i, db_strerror(err));
+ exit(EXIT_FAILURE);
+ }
+ if (verbose)
+ printf("%lu ", (u_long)mp->mutex);
+ }
+ if (verbose)
+ printf("\n");
+}
+
+/*
+ * tm_mutex_destroy --
+ * Destroy the mutexes.
+ */
+void
+tm_mutex_destroy()
+{
+ TM *gp, *mp;
+ u_int i;
+ int err;
+
+ if (verbose)
+ printf("Destroy the global mutex.\n");
+ gp = (TM *)gm_addr;
+ if ((err = dbenv->mutex_free(dbenv, gp->mutex)) != 0) {
+ fprintf(stderr, "%s: DB_ENV->mutex_free (global): %s\n",
+ progname, db_strerror(err));
+ exit(EXIT_FAILURE);
+ }
+
+ if (verbose)
+ printf("Destroy the per-thread mutexes.\n");
+ for (i = 0; i < nthreads * nprocs; ++i) {
+ mp = (TM *)(tm_addr + i * sizeof(TM));
+ if ((err = dbenv->mutex_free(dbenv, mp->mutex)) != 0) {
+ fprintf(stderr,
+ "%s: DB_ENV->mutex_free (per-thread %d): %s\n",
+ progname, i, db_strerror(err));
+ exit(EXIT_FAILURE);
+ }
+ }
+
+ if (verbose)
+ printf("Destroy the per-lock mutexes.\n");
+ for (i = 0; i < maxlocks; ++i) {
+ mp = (TM *)(lm_addr + i * sizeof(TM));
+ if ((err = dbenv->mutex_free(dbenv, mp->mutex)) != 0) {
+ fprintf(stderr,
+ "%s: DB_ENV->mutex_free (per-lock: %d): %s\n",
+ progname, i, db_strerror(err));
+ exit(EXIT_FAILURE);
+ }
+ }
+}
+
+/*
+ * tm_mutex_stats --
+ * Display mutex statistics.
+ */
+void
+tm_mutex_stats()
+{
+#ifdef HAVE_STATISTICS
+ TM *mp;
+ uintmax_t set_wait, set_nowait;
+ u_int i;
+
+ printf("Per-lock mutex statistics.\n");
+ for (i = 0; i < maxlocks; ++i) {
+ mp = (TM *)(lm_addr + i * sizeof(TM));
+ __mutex_set_wait_info(env, mp->mutex, &set_wait, &set_nowait);
+		printf("mutex %2d: wait: %lu; no wait: %lu\n", i,
+		    (u_long)set_wait, (u_long)set_nowait);
+ }
+#endif
+}
+
+/*
+ * data_on --
+ * Map in or allocate the backing data space.
+ */
+void
+data_on(gm_addrp, tm_addrp, lm_addrp, fhpp, init)
+ u_int8_t **gm_addrp, **tm_addrp, **lm_addrp;
+ DB_FH **fhpp;
+ int init;
+{
+ DB_FH *fhp;
+ size_t nwrite;
+ int err;
+ void *addr;
+
+ fhp = NULL;
+
+ /*
+ * In a single process, use heap memory.
+ */
+ if (nprocs == 1) {
+ if (init) {
+ if ((err =
+ __os_calloc(env, (size_t)len, 1, &addr)) != 0)
+ exit(EXIT_FAILURE);
+ } else {
+ fprintf(stderr,
+ "%s: init should be set for single process call\n",
+ progname);
+ exit(EXIT_FAILURE);
+ }
+ } else {
+ if (init) {
+ if (verbose)
+ printf("Create the backing file.\n");
+
+			if ((err = __os_open(env, MT_FILE, 0,
+			    DB_OSO_CREATE | DB_OSO_TRUNC, 0666, &fhp)) != 0) {
+ fprintf(stderr, "%s: %s: open: %s\n",
+ progname, MT_FILE, db_strerror(err));
+ exit(EXIT_FAILURE);
+ }
+
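+			/* Force the file out to its full length. */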
+ if ((err =
+ __os_seek(env, fhp, 0, 0, (u_int32_t)len)) != 0 ||
+ (err =
+ __os_write(env, fhp, &err, 1, &nwrite)) != 0 ||
+ nwrite != 1) {
+ fprintf(stderr, "%s: %s: seek/write: %s\n",
+ progname, MT_FILE, db_strerror(err));
+ exit(EXIT_FAILURE);
+ }
+ } else
+ if ((err = __os_open(env, MT_FILE, 0, 0, 0, &fhp)) != 0)
+ exit(EXIT_FAILURE);
+
+ if ((err =
+ __os_mapfile(env, MT_FILE, fhp, len, 0, &addr)) != 0)
+ exit(EXIT_FAILURE);
+ }
+
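+	/*
+	 * Carve the space into three pieces: one global TM, then
+	 * nthreads * nprocs per-thread TMs, then maxlocks per-lock TMs.
+	 */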
+ *gm_addrp = (u_int8_t *)addr;
+ addr = (u_int8_t *)addr + sizeof(TM);
+ *tm_addrp = (u_int8_t *)addr;
+ addr = (u_int8_t *)addr + sizeof(TM) * (nthreads * nprocs);
+ *lm_addrp = (u_int8_t *)addr;
+
+ if (fhpp != NULL)
+ *fhpp = fhp;
+}
+
+/*
+ * data_off --
+ * Discard or de-allocate the backing data space.
+ */
+void
+data_off(addr, fhp)
+ u_int8_t *addr;
+ DB_FH *fhp;
+{
+ if (nprocs == 1)
+ __os_free(env, addr);
+ else {
+ if (__os_unmapfile(env, addr, len) != 0)
+ exit(EXIT_FAILURE);
+ if (__os_closehandle(env, fhp) != 0)
+ exit(EXIT_FAILURE);
+ }
+}
+
+/*
+ * usage --
+ *	Display a usage message.
+ */
+int
+usage()
+{
+ fprintf(stderr, "usage: %s %s\n\t%s\n", progname,
+ "[-v] [-l maxlocks]",
+ "[-n locks] [-p procs] [-T locker=ID|wakeup=ID] [-t threads]");
+ return (EXIT_FAILURE);
+}
+
+/*
+ * os_wait --
+ * Wait for an array of N procs.
+ */
+int
+os_wait(procs, n)
+ os_pid_t *procs;
+ u_int n;
+{
+ u_int i;
+ int status;
+#if defined(DB_WIN32)
+ DWORD ret;
+#endif
+
+ status = 0;
+
+#if defined(DB_WIN32)
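+	/*
+	 * WaitForMultipleObjects returns WAIT_OBJECT_0 + i when the i'th
+	 * handle is signaled, that is, when that process exits.
+	 */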
+ do {
+ ret = WaitForMultipleObjects(n, procs, FALSE, INFINITE);
+ i = ret - WAIT_OBJECT_0;
+		if (i >= n)
+ return (__os_posix_err(__os_get_syserr()));
+
+ if ((GetExitCodeProcess(procs[i], &ret) == 0) || (ret != 0))
+ return (ret);
+
+ /* remove the process handle from the list */
+ while (++i < n)
+ procs[i - 1] = procs[i];
+ } while (--n);
+#elif !defined(HAVE_VXWORKS)
+ do {
+ if (wait(&status) == -1)
+ return (__os_posix_err(__os_get_syserr()));
+
+ if (WIFEXITED(status) == 0 || WEXITSTATUS(status) != 0) {
+ for (i = 0; i < n; i++)
+ (void)kill(procs[i], SIGKILL);
+ return (WEXITSTATUS(status));
+ }
+ } while (--n);
+#endif
+
+ return (0);
+}
+
+os_pid_t
+spawn_proc(id, tmpath, typearg)
+ u_long id;
+ char *tmpath, *typearg;
+{
+ char *const vbuf = verbose ? "-v" : NULL;
+ char *args[13], lbuf[16], nbuf[16], pbuf[16], tbuf[16], Tbuf[256];
+
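+	/* Build an argv that re-runs this program in the given role. */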
+ args[0] = tmpath;
+ args[1] = "-l";
+ snprintf(lbuf, sizeof(lbuf), "%d", maxlocks);
+ args[2] = lbuf;
+ args[3] = "-n";
+ snprintf(nbuf, sizeof(nbuf), "%d", nlocks);
+ args[4] = nbuf;
+ args[5] = "-p";
+ snprintf(pbuf, sizeof(pbuf), "%d", nprocs);
+ args[6] = pbuf;
+ args[7] = "-t";
+ snprintf(tbuf, sizeof(tbuf), "%d", nthreads);
+ args[8] = tbuf;
+ args[9] = "-T";
+ snprintf(Tbuf, sizeof(Tbuf), "%s=%lu", typearg, id);
+ args[10] = Tbuf;
+ args[11] = vbuf;
+ args[12] = NULL;
+
+ return (os_spawn(tmpath, args));
+}
+
+os_pid_t
+os_spawn(path, argv)
+ const char *path;
+ char *const argv[];
+{
+ os_pid_t pid;
+ int status;
+
+ COMPQUIET(pid, 0);
+ COMPQUIET(status, 0);
+
+#ifdef HAVE_VXWORKS
+ fprintf(stderr, "%s: os_spawn not supported for VxWorks.\n", progname);
+ return (OS_BAD_PID);
+#elif defined(HAVE_QNX)
+	/*
+	 * For QNX, we cannot fork if we've ever used threads.  So
+	 * we'll use their spawn function.  We use 'spawnv' which
+	 * is NOT a POSIX function.
+	 *
+	 * With the P_NOWAIT mode argument, spawnv returns the child's
+	 * process ID, which is just what we want.
+	 */
+ return (spawnv(P_NOWAIT, path, argv));
+#elif defined(DB_WIN32)
+ return (os_pid_t)(_spawnv(P_NOWAIT, path, argv));
+#else
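+	/* POSIX: fork, then exec the child; the parent returns the pid. */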
+ if ((pid = fork()) != 0) {
+ if (pid == -1)
+ return (OS_BAD_PID);
+ return (pid);
+ } else {
+ (void)execv(path, argv);
+ exit(EXIT_FAILURE);
+ }
+#endif
+}
diff --git a/db-4.8.30/mutex/uts4_cc.s b/db-4.8.30/mutex/uts4_cc.s
new file mode 100644
index 0000000..1c67c8b
--- /dev/null
+++ b/db-4.8.30/mutex/uts4_cc.s
@@ -0,0 +1,26 @@
+ / See the file LICENSE for redistribution information.
+ /
+ / Copyright (c) 1997-2009 Oracle. All rights reserved.
+ /
+ / $Id$
+ /
+ / int uts_lock ( int *p, int i );
+ / Update the lock word pointed to by p with the
+ / value i, using compare-and-swap.
+ / Returns 0 if update was successful.
+ / Returns 1 if update failed.
+ /
+ entry uts_lock
+ uts_lock:
+ using .,r15
+ st r2,8(sp) / Save R2
+ l r2,64+0(sp) / R2 -> word to update
+	slr r0, r0		/ R0 = 0, the expected current lock value
+ l r1,64+4(sp) / R1 = new lock value
+ cs r0,r1,0(r2) / Try the update ...
+ be x / ... Success. Return 0
+ la r0,1 / ... Failure. Return 1
+ x: /
+ l r2,8(sp) / Restore R2
+ b 2(,r14) / Return to caller
+ drop r15