How to make one task (insert) wait for another task (remove) in Objective-C?
How to make one operation wait for another to complete in Objective-C
I’m sorry but I don’t understand your question. As written the answer would be “call the method that does the ‘insert’ before calling the method that does the ‘delete’”:
- (void)doInsertThenDelete {
    [self insert];
    [self delete];
}
but I’m pretty sure that’s not what you’re looking for. Please clarify your requirements.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
When a condition is satisfied (for example, buffer overflow), the insert operation should wait until someone calls the delete operation, so that the buffer will not overflow.
Is this a duplicate of the question in your other thread? If so, I’ll answer over there. If not, please explain how it’s different.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
No, this is not a duplicate. The other thread deals with buffer overflow (the array count should not exceed a limit). This thread is about read/write access. When the mutable array is being updated (add/remove/insert/exchange), the read operation should be blocked, and once the write operation is complete, the read operation should continue. An atomic property will help avoid partial reads, but it won’t stop the array from being read while it’s getting updated.
One Shared Resource, Multiple Readers, and a Single Writer
You can implement a read/write lock using pthreads. To get started, see the pthread_rwlock_init man page.
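If you go that route, a minimal sketch looks something like this (the names gValueLock, gSharedValue, and the two functions are made up for illustration; error handling omitted):

#include <pthread.h>

// One lock protects one piece of shared state.
static pthread_rwlock_t gValueLock = PTHREAD_RWLOCK_INITIALIZER;
static int gSharedValue;

int ReadSharedValue(void) {
    pthread_rwlock_rdlock(&gValueLock);     // multiple readers may hold the lock at once
    int result = gSharedValue;
    pthread_rwlock_unlock(&gValueLock);
    return result;
}

void WriteSharedValue(int newValue) {
    pthread_rwlock_wrlock(&gValueLock);     // a writer gets exclusive access
    gSharedValue = newValue;
    pthread_rwlock_unlock(&gValueLock);
}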
Having said that, my experience is that read/write locks are more complex than they’re worth. It’s usually better to use a lighter-weight lock, like os_unfair_lock, and then reduce the amount of work that you do with the lock held. For example, imagine you’re using a dictionary as a cache. Don’t do this:
lock
    look up the item
    if it's not present
        allocate a new item
        add it to the dictionary
    end if
unlock
but instead do this:
lock
    look up the item
unlock
if the item wasn't present
    allocate a new item
    lock
        look up the item again
        if it's not present
            add it to the dictionary
        end if
    unlock
    if you didn't add the item to the dictionary
        deallocate it
    end if
end if
This limits the amount of code that you run under the lock, which has two benefits:
- It reduces lock contention, and thus avoids the need for a read/write lock.
- It helps you avoid deadlocks. If you call your “allocate a new item” code with the lock held, it’s easy to trigger a deadlock deep in that code [1].
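Sketched in Objective-C with os_unfair_lock, that second pattern might look something like this (the ItemCache class and its makeItemForKey: method are made up for illustration; under ARC the “deallocate it” step happens automatically when the unused item goes out of scope):

#import <Foundation/Foundation.h>
#import <os/lock.h>

@interface ItemCache : NSObject
- (id)itemForKey:(NSString *)key;
@end

@implementation ItemCache {
    os_unfair_lock _lock;                   // protects _itemsByKey, and nothing else
    NSMutableDictionary<NSString *, id> *_itemsByKey;
}

- (instancetype)init {
    if (self = [super init]) {
        _lock = OS_UNFAIR_LOCK_INIT;
        _itemsByKey = [[NSMutableDictionary alloc] init];
    }
    return self;
}

- (id)itemForKey:(NSString *)key {
    // First look-up; the lock is held only long enough to read the dictionary.
    os_unfair_lock_lock(&_lock);
    id item = _itemsByKey[key];
    os_unfair_lock_unlock(&_lock);
    if (item != nil) {
        return item;
    }

    // Allocate the new item with the lock *not* held.
    id newItem = [self makeItemForKey:key];

    // Second look-up; another thread may have added an item in the meantime.
    os_unfair_lock_lock(&_lock);
    item = _itemsByKey[key];
    if (item == nil) {
        _itemsByKey[key] = newItem;
        item = newItem;
    }
    os_unfair_lock_unlock(&_lock);
    return item;
}

// Hypothetical stand-in for the expensive allocation work.
- (id)makeItemForKey:(NSString *)key {
    return [key copy];
}

@end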
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
[1] Some folks use a recursive lock to avoid this, but that’s another technology that I generally recommend against.
Thanks Quinn, I believe you are suggesting the following implementation. Please correct me if I am wrong.
@interface ThreadSafeArray()
@property (strong, nonatomic) NSMutableArray *data;
@implementation ThreadSafeArray
-
(instancetype)init { if (self = [super init]) {
} return self;
}
- (id)peek { __block id result = nil; dispatch_sync(queue, ^{ result = [self.data firstObject] }); return result;
}
- (NSUInteger)length { __block NSUInteger count = 0; dispatch_sync(queue, ^{ result = [self.data count] }); return count;
}
- (void)enqueue:(id)datum { dispatch_async(queue, ^{
[NSLock lock]; [self.data addObject:datum] ; [NSLock unlock]; }); } @end
Can we use a barrier async instead of a lock, as shown below?
@interface ThreadSafeArray()
@property (strong, nonatomic) NSMutableArray *data; @property (strong, nonatomic) dispatch_queue_t queue;
@implementation ThreadSafeArray
- (instancetype)init { if (self = [super init]) { queue = dispatch_queue_create("ThreadSafeArray", DISPATCH_QUEUE_CONCURRENT); } return self;
}
- (id)peek { __block id result = nil; dispatch_sync(queue, ^{ result = [self.data firstObject] }); return result;
}
- (NSUInteger)length { __block NSUInteger count = 0; dispatch_sync(queue, ^{ result = [self.data count] }); return count;
}
- (void)enqueue:(id)datum { dispatch_barrier_async(queue, ^{ [self.data addObject:datum] });
} @end
Please correct me if I am wrong.
Honestly, I can’t read your code because of the formatting. Can you post it again, this time using the Code Block button to format it as… well… a code block (or just surround it with triple backquotes)?
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
Thanks Quinn, I believe you are suggesting the following implementation. Please correct me if I am wrong.
@interface ThreadSafeArray ()

@property (strong, nonatomic) NSMutableArray *data;
@property (strong, nonatomic) dispatch_queue_t queue;
@property (strong, nonatomic) NSLock *lock;

@end

@implementation ThreadSafeArray

- (instancetype)init {
    if (self = [super init]) {
        _data = [[NSMutableArray alloc] init];
        _queue = dispatch_queue_create("ThreadSafeArray", DISPATCH_QUEUE_SERIAL);
        _lock = [[NSLock alloc] init];
    }
    return self;
}

- (id)peek {
    __block id result = nil;
    dispatch_sync(self.queue, ^{
        result = [self.data firstObject];
    });
    return result;
}

- (NSUInteger)length {
    __block NSUInteger count = 0;
    dispatch_sync(self.queue, ^{
        count = [self.data count];
    });
    return count;
}

- (void)enqueue:(id)datum {
    // Writes are queued asynchronously and guarded by the lock.
    dispatch_async(self.queue, ^{
        [self.lock lock];
        [self.data addObject:datum];
        [self.lock unlock];
    });
}

@end
Can we use a barrier async instead of a lock, as shown below?
@interface ThreadSafeArray ()

@property (strong, nonatomic) NSMutableArray *data;
@property (strong, nonatomic) dispatch_queue_t queue;

@end

@implementation ThreadSafeArray

- (instancetype)init {
    if (self = [super init]) {
        _data = [[NSMutableArray alloc] init];
        _queue = dispatch_queue_create("ThreadSafeArray", DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}

- (id)peek {
    __block id result = nil;
    dispatch_sync(self.queue, ^{
        result = [self.data firstObject];
    });
    return result;
}

- (NSUInteger)length {
    __block NSUInteger count = 0;
    dispatch_sync(self.queue, ^{
        count = [self.data count];
    });
    return count;
}

- (void)enqueue:(id)datum {
    // The barrier waits for in-flight readers and blocks new ones while the write runs.
    dispatch_barrier_async(self.queue, ^{
        [self.data addObject:datum];
    });
}

@end
I believe you are suggesting the following implementation.
No, that’s not what I suggested.
Using a Dispatch queue for a tiny critical section like this is complete overkill. If you have to use such a tiny critical section then an os_unfair_lock is what I recommend. My general advice, however, is that you not create such a tiny critical section but rather manage your concurrency at a higher level.
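For reference, the mechanics of that look something like this, reusing your ThreadSafeArray class (a sketch only, not the higher-level design I’m recommending):

#import <Foundation/Foundation.h>
#import <os/lock.h>

@interface ThreadSafeArray : NSObject
- (id)peek;
- (NSUInteger)length;
- (void)enqueue:(id)datum;
@end

@implementation ThreadSafeArray {
    os_unfair_lock _lock;       // protects _data, and nothing else
    NSMutableArray *_data;
}

- (instancetype)init {
    if (self = [super init]) {
        _lock = OS_UNFAIR_LOCK_INIT;
        _data = [[NSMutableArray alloc] init];
    }
    return self;
}

- (id)peek {
    os_unfair_lock_lock(&_lock);
    id result = [_data firstObject];
    os_unfair_lock_unlock(&_lock);
    return result;
}

- (NSUInteger)length {
    os_unfair_lock_lock(&_lock);
    NSUInteger count = [_data count];
    os_unfair_lock_unlock(&_lock);
    return count;
}

- (void)enqueue:(id)datum {
    os_unfair_lock_lock(&_lock);
    [_data addObject:datum];
    os_unfair_lock_unlock(&_lock);
}

@end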
Can we use a barrier async instead of a lock, as shown below?
It seems that this code takes a bad idea, using a Dispatch queue as a lock for a tiny critical section, and then applies an additional bad idea on top of that: concurrent queues.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
You said "My general advice, however, is that you not create such a tiny critical section but rather manage your concurrency at a higher level."
Could you please add more context, and a code snippet if possible?
Could you please add more context, and a code snippet if possible?
My recommendation is that you work at a much higher level. In this thread you’re trying to create a thread-safe data structure, but when I write concurrent code I try to structure it so that I don’t have to worry about the thread safety of specific components. Rather:
- I use the highest-level API that’s available.
- I try to confine data structures so that mutable state is only visible to one thread (sketched below).
For an example of this, see Technote 2109 Simple and Reliable Threading with NSOperation and its associated ListAdder sample code.
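As a rough sketch of the confinement point (the ListController class and its methods below are made up for illustration, not taken from the sample): do the slow work on an operation queue, keep the mutable array on the main thread, and hop back to the main queue to update it.

#import <Foundation/Foundation.h>

@interface ListController : NSObject
- (void)addEntryDerivedFrom:(id)input;
@end

@implementation ListController {
    // Only ever touched on the main thread, so it needs no locking at all.
    NSMutableArray *_results;
    NSOperationQueue *_workQueue;
}

- (instancetype)init {
    if (self = [super init]) {
        _results = [[NSMutableArray alloc] init];
        _workQueue = [[NSOperationQueue alloc] init];
    }
    return self;
}

- (void)addEntryDerivedFrom:(id)input {
    [_workQueue addOperationWithBlock:^{
        // Do the expensive work off the main thread, using only local state.
        id newEntry = [self expensiveTransform:input];

        // Hop back to the main queue to touch the mutable array.
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            [self->_results addObject:newEntry];
        }];
    }];
}

// Hypothetical stand-in for whatever work you actually need to do.
- (id)expensiveTransform:(id)input {
    return [input description];
}

@end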
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"