Grand Central Dispatch - NSOperation or GCD threading with animations


I'm trying to run a sequence of disk-based, old-school AVI animations (A, B, C...), back-to-back, with a nice transition in between.

I'm looking for a little guidance, admittedly not having done threading work in some time, and never having done NSOperation or GCD.

These animations run at 30fps and are typically less than a minute in duration, along with a CoreImage-assisted transition in between. Timing-wise, things are pretty tight, so multi-threading is needed. Since I'm using an SSD, the disk-read rate is (theoretically) double the consumption rate, but there's still a chunk of post-read and pre-display processing to occur that would lag the whole process if singly-threaded -- not to mention it being a bad idea.

Here's the flow: firstly, read the starting frame(s) of the animation (possibly using a serial NSOperation queue for these reads). We then have the raw data in an NSMutableArray of objects. Then, for each frame that's been read, convert the data in the array to CoreImage format (either using a similar separate serial "rendered" queue, or a completion handler on each frame read from disk).
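[Editor's note: a minimal sketch of this serial read-then-convert idea, assuming NSOperationQueue with maxConcurrentOperationCount = 1 as the serial queues; readFrameDataAtIndex: and enqueueRenderedFrame: are hypothetical helpers, not part of the original post.]

    NSOperationQueue* readQueue = [[NSOperationQueue alloc] init];
    readQueue.maxConcurrentOperationCount = 1; // serial read queue

    NSOperationQueue* renderQueue = [[NSOperationQueue alloc] init];
    renderQueue.maxConcurrentOperationCount = 1; // serial "rendered" queue

    for (NSUInteger i = 0; i < frameCount; ++i)
    {
        [readQueue addOperationWithBlock:^{
            NSData* raw = [self readFrameDataAtIndex:i]; // disk read

            // Completion-handler style: hand the raw frame off to the
            // conversion queue as soon as it's read.
            [renderQueue addOperationWithBlock:^{
                CIImage* image = [CIImage imageWithData:raw];
                [self enqueueRenderedFrame:image];
            }];
        }];
    }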

(A wrinkle: if the animation isn't in AVI format, I'll be using AVAssetImageGenerator and generateCGImagesAsynchronouslyForTimes to generate the rendered results instead.)
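[Editor's note: a minimal sketch of that non-AVI path, assuming a local asset URL and 30fps timing; kFrameRate and the enqueueRenderedFrame:atTime: hook are assumptions for illustration.]

    #import <AVFoundation/AVFoundation.h>

    static const int32_t kFrameRate = 30; // assumed frame rate

    - (void)generateFramesForAsset:(NSURL *)assetURL frameCount:(NSUInteger)frameCount
    {
        AVAsset* asset = [AVAsset assetWithURL:assetURL];
        AVAssetImageGenerator* generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];

        // Tight tolerances so we get the exact frames, not nearby keyframes.
        generator.requestedTimeToleranceBefore = kCMTimeZero;
        generator.requestedTimeToleranceAfter = kCMTimeZero;

        NSMutableArray* times = [NSMutableArray arrayWithCapacity:frameCount];
        for (NSUInteger i = 0; i < frameCount; ++i)
        {
            [times addObject:[NSValue valueWithCMTime:CMTimeMake(i, kFrameRate)]];
        }

        [generator generateCGImagesAsynchronouslyForTimes:times
                                        completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime, AVAssetImageGeneratorResult result, NSError* error) {
            if (result == AVAssetImageGeneratorSucceeded)
            {
                // Wrap the CGImage for the CoreImage stage and feed it to the
                // same serial "rendered" queue the AVI path feeds.
                CIImage* ciImage = [CIImage imageWithCGImage:image];
                // [self enqueueRenderedFrame:ciImage atTime:requestedTime]; // hypothetical hook
            }
        }];
    }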

Continue this process with a producer-like queue through the whole file, throttling it after 2-3 seconds' worth of loaded and converted data. Treat the resulting array of image data as a circular, bounded buffer.
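[Editor's note: a minimal sketch of one way to bound such a producer with a counting semaphore -- one "slot" per buffered frame, 90 slots being an assumed 3 seconds at 30fps; readAndConvertNextFrame and frameWasConsumed are hypothetical hooks.]

    static const long kMaxBufferedFrames = 90; // assumed: ~3 seconds @ 30fps

    @interface SOFrameProducer : NSObject
    - (void)startProducing;
    - (void)frameWasConsumed;
    - (BOOL)readAndConvertNextFrame; // hypothetical: returns NO at EOF
    @end

    @implementation SOFrameProducer
    {
        dispatch_semaphore_t mSlots;
        dispatch_queue_t mReadQueue;
    }

    - (instancetype)init
    {
        if ((self = [super init]))
        {
            mSlots = dispatch_semaphore_create(kMaxBufferedFrames);
            mReadQueue = dispatch_queue_create("frame.read", DISPATCH_QUEUE_SERIAL);
        }
        return self;
    }

    - (void)startProducing
    {
        dispatch_async(mReadQueue, ^{
            // Blocks once kMaxBufferedFrames frames are buffered; each consumed
            // frame signals the semaphore, letting one more read proceed.
            while (dispatch_semaphore_wait(mSlots, DISPATCH_TIME_FOREVER) == 0)
            {
                if (![self readAndConvertNextFrame])
                    break;
            }
        });
    }

    - (void)frameWasConsumed // call from the consumer after display
    {
        dispatch_semaphore_signal(mSlots);
    }
    @end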

There's a separate consumer queue (a CVDisplayLink vertical-blanking call) that pulls items off the rendered queue. This'll be on the main thread at 60Hz. I'll be drawing the rendered image on the off-cycle, and swapping it in on-cycle, for 30fps throughput.
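[Editor's note: a minimal sketch of halving a 60Hz display link to 30fps as described -- prepare on the "off" tick, swap in on the "on" tick; prepareNextFrame() and presentPreparedFrame() are hypothetical stand-ins.]

    static CVReturn HalfRateCallback(CVDisplayLinkRef displayLink, const CVTimeStamp* now,
                                     const CVTimeStamp* outputTime, CVOptionFlags flagsIn,
                                     CVOptionFlags* flagsOut, void* context)
    {
        static uint64_t tick = 0; // display link delivers callbacks serially

        if (tick++ % 2 == 0)
            presentPreparedFrame(); // swap the ready frame in on the "on" cycle
        else
            prepareNextFrame();     // render the next frame off-screen on the "off" cycle

        return kCVReturnSuccess;
    }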

Once we're satisfied that animation "A" is running smoothly (e.g. after 5 seconds), spin up yet another serial queue to begin pairing frames for the upcoming transition... If there are "n" frames in animation "A", read frame n-15 (a half-second from the end) of A, match it with the first frame of animation "B", and send the 2 frames off to CoreImage for transitioning via a CIFilter. Continue matching frame n-14 (A) with frame 2 (B), and so on. Obviously, each of these frame reads will require conversion and storage in a separate data structure. This'll create a nice upcoming 1/2 second transition.
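[Editor's note: a minimal sketch of one such cross-fade step, assuming CIDissolveTransition (any CoreImage transition filter works the same way); step/totalSteps index the 15-frame overlap window.]

    - (CIImage *)transitionFrameFrom:(CIImage *)frameA
                                  to:(CIImage *)frameB
                                step:(NSUInteger)step
                          totalSteps:(NSUInteger)totalSteps
    {
        CIFilter* dissolve = [CIFilter filterWithName:@"CIDissolveTransition"];
        [dissolve setValue:frameA forKey:kCIInputImageKey];
        [dissolve setValue:frameB forKey:kCIInputTargetImageKey];

        // inputTime runs 0.0 -> 1.0 across the overlap window.
        [dissolve setValue:@((double)step / (double)(totalSteps - 1))
                    forKey:kCIInputTimeKey];

        return dissolve.outputImage;
    }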

When it comes time to display the animation transition, sub in these transitioned frames for display, carry on displaying the rest of animation B... then spin up animation C for transitioning, etc...

Any pointers on where to start?

Your situation is a little more complex than the situation outlined in Apple's documentation, so read that (and if you're still saying, "Huh?" after reading it, go read this answer) to understand the intended pattern. In short, the general idea is that the producer "drives" the chain, and GCD's hooks into the OS make sure things are being dispatched appropriately based on the state of various things in the kernel.

The problem with this approach, w/r/t the problem at hand, is that it's not straightforward to let the producer side drive things here, because your consumer is driven in real time by vertical-blanking callbacks, and not purely by the availability of consumable resources. The case is further complicated by the inherently serial nature of your workflow -- for instance, even if you could theoretically parallelize the decoding of frame data into images, the images would still have to be delivered serially to the next stage in the pipeline, a case not handled by the GCD API in streaming scenarios (i.e. it would be easy with dispatch_apply if you had everything in memory at once, but that cuts to the heart of the problem: you need this to happen in a quasi-streaming context).
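[Editor's note: a minimal sketch, not part of the answer's code below, of one way to parallelize per-frame decoding while re-serializing delivery: decode on a concurrent queue, then hop to a serial queue that holds completed frames in a dictionary until the next-expected index arrives. mReorderQueue (serial), mPendingFrames (NSMutableDictionary), mNextIndexToDeliver (NSUInteger), decodeFrame:, and deliverFrame: are all assumed.]

    - (void)decodeFrameConcurrently:(NSData *)rawFrame index:(NSUInteger)index
    {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            id decoded = [self decodeFrame:rawFrame]; // CPU-heavy, order-independent

            dispatch_async(mReorderQueue, ^{ // serial queue guarding the state below
                mPendingFrames[@(index)] = decoded;

                // Flush every frame that is now contiguous with the last one delivered.
                while (mPendingFrames[@(mNextIndexToDeliver)])
                {
                    NSNumber* key = @(mNextIndexToDeliver);
                    [self deliverFrame:mPendingFrames[key]];
                    [mPendingFrames removeObjectForKey:key];
                    mNextIndexToDeliver++;
                }
            });
        });
    }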

In trying to think of how one might handle this, I came up with the following example, which attempts to simulate your situation by using text files, where each line in a file is a "frame" of video, and which "crossfades" two clips by appending strings. A full, working (for me, at least) version is available here. The code is meant to illustrate how you might architect a processing pipeline using GCD primitives, with a (largely) producer-driven pattern, while still linking up with a CVDisplayLink-based consumer.

It is not bulletproof (i.e. among many other things, it's not tolerant of a file with fewer frames than are needed for the overlap) and may totally fail to address your real-time or memory-use bounding requirements (which are hard for me to replicate & test without doing more work than I'm willing to do. :) ) It also doesn't try to address the issue mentioned above, where you might be able to parallelize workloads that need to be re-serialized before the next stage of the pipeline. (The code assumes ARC.) With those caveats, there are still some interesting/relevant ideas here for you. Here's the code:

#import <Cocoa/Cocoa.h>
#import <CoreVideo/CoreVideo.h>

static void DieOnError(int error);
static NSString* NSStringFromDispatchData(dispatch_data_t data);
static dispatch_data_t FrameDataFromAccumulator(dispatch_data_t* accumulator);
static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink, const CVTimeStamp* now, const CVTimeStamp* outputTime, CVOptionFlags flagsIn, CVOptionFlags* flagsOut, void* displayLinkContext);

static const NSUInteger kFramesToOverlap = 15;

@implementation SOAppDelegate
{
    // Display link state
    CVDisplayLinkRef mDisplayLink;

    // State for our file reading process -- protected via mFrameReadQueue
    dispatch_queue_t mFrameReadQueue;
    NSUInteger mFileIndex; // keep track of what file we're reading
    dispatch_io_t mReadingChannel; // channel for reading
    dispatch_data_t mFrameReadAccumulator; // keep track of left-over data across read operations

    // State for processing raw frame data delivered by the read process -- protected via mFrameDataProcessingQueue
    dispatch_queue_t mFrameDataProcessingQueue;
    NSMutableArray* mFilesForOverlapping;
    NSMutableArray* mFrameArraysForOverlapping;

    // State for blending frames (or passing them through)
    dispatch_queue_t mFrameBlendingQueue;

    // Delivery state
    dispatch_queue_t mFrameDeliveryQueue; // suspended/resumed to deliver one frame at a time
    dispatch_queue_t mFrameDeliveryStateQueue; // protects access to the ivars below
    dispatch_data_t mDeliveredFrame; // data of the frame that has been delivered, but not yet picked up by the CVDisplayLink
    NSInteger mLastFrameDelivered; // counter of frames delivered
    NSInteger mLastFrameDisplayed; // counter of frames displayed
}

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    mFileIndex = 1;
    mLastFrameDelivered = -1;
    mLastFrameDisplayed = -1;

    mFrameReadQueue = dispatch_queue_create("mFrameReadQueue", DISPATCH_QUEUE_SERIAL);
    mFrameDataProcessingQueue = dispatch_queue_create("mFrameDataProcessingQueue", DISPATCH_QUEUE_SERIAL);
    mFrameBlendingQueue = dispatch_queue_create("mFrameBlendingQueue", DISPATCH_QUEUE_SERIAL);
    mFrameDeliveryQueue = dispatch_queue_create("mFrameDeliveryQueue", DISPATCH_QUEUE_SERIAL);
    mFrameDeliveryStateQueue = dispatch_queue_create("mFrameDeliveryStateQueue", DISPATCH_QUEUE_SERIAL);

    CVDisplayLinkCreateWithActiveCGDisplays(&mDisplayLink);
    CVDisplayLinkSetOutputCallback(mDisplayLink, &MyDisplayLinkCallback, (__bridge void*)self);

    [self readNextFile];
}

- (void)dealloc
{
    if (mDisplayLink)
    {
        if (CVDisplayLinkIsRunning(mDisplayLink))
        {
            CVDisplayLinkStop(mDisplayLink);
        }
        CVDisplayLinkRelease(mDisplayLink);
    }
}

- (void)readNextFile
{
    dispatch_async(mFrameReadQueue, ^{
        NSURL* url = [[NSBundle mainBundle] URLForResource: [NSString stringWithFormat: @"File%lu", mFileIndex++] withExtension: @"txt"];

        if (!url)
            return;

        if (mReadingChannel)
        {
            dispatch_io_close(mReadingChannel, DISPATCH_IO_STOP);
            mReadingChannel = nil;
        }

        // We don't care what queue the cleanup handler gets called on, because we know there's only ever one file being read at a time
        mReadingChannel = dispatch_io_create_with_path(DISPATCH_IO_STREAM, [[url path] fileSystemRepresentation], O_RDONLY|O_NONBLOCK, 0, mFrameReadQueue, ^(int error) {
            DieOnError(error);

            mReadingChannel = nil;

            // Start the next file
            [self readNextFile];
        });

        // We don't care what queue the read handlers get called on, because we know they're inherently serial
        dispatch_io_read(mReadingChannel, 0, SIZE_MAX, mFrameReadQueue, ^(bool done, dispatch_data_t data, int error) {
            DieOnError(error);

            // Grab frames
            dispatch_data_t localAccumulator = mFrameReadAccumulator ? dispatch_data_create_concat(mFrameReadAccumulator, data) : data;
            dispatch_data_t frameData = nil;
            do
            {
                frameData = FrameDataFromAccumulator(&localAccumulator);
                mFrameReadAccumulator = localAccumulator;
                [self processFrameData: frameData fromFile: url];
            } while (frameData);

            if (done)
            {
                dispatch_io_close(mReadingChannel, DISPATCH_IO_STOP);
            }
        });
    });
}

- (void)processFrameData: (dispatch_data_t)frameData fromFile: (NSURL*)file
{
    if (!frameData || !file)
        return;

    // We want the data blobs constituting each frame to be processed serially
    dispatch_async(mFrameDataProcessingQueue, ^{
        mFilesForOverlapping = mFilesForOverlapping ?: [NSMutableArray array];
        mFrameArraysForOverlapping = mFrameArraysForOverlapping ?: [NSMutableArray array];

        NSMutableArray* arrayToAddTo = nil;
        if ([file isEqual: mFilesForOverlapping.lastObject])
        {
            arrayToAddTo = mFrameArraysForOverlapping.lastObject;
        }
        else
        {
            arrayToAddTo = [NSMutableArray array];
            [mFilesForOverlapping addObject: file];
            [mFrameArraysForOverlapping addObject: arrayToAddTo];
        }

        [arrayToAddTo addObject: frameData];

        // We've gotten to file two, and we have enough frames to process the overlap
        if (mFrameArraysForOverlapping.count == 2 && [mFrameArraysForOverlapping[1] count] >= kFramesToOverlap)
        {
            NSMutableArray* fileOneFrames = mFrameArraysForOverlapping[0];
            NSMutableArray* fileTwoFrames = mFrameArraysForOverlapping[1];

            for (NSUInteger i = 0; i < kFramesToOverlap; ++i)
            {
                [self blendOneFrame: fileOneFrames[0] withOtherFrame: fileTwoFrames[0]];
                [fileOneFrames removeObjectAtIndex: 0];
                [fileTwoFrames removeObjectAtIndex: 0];
            }

            [mFilesForOverlapping removeObjectAtIndex: 0];
            [mFrameArraysForOverlapping removeObjectAtIndex: 0];
        }

        // We're pulling in frames from file 1, haven't gotten to file 2 yet, but have more than enough to overlap
        while (mFrameArraysForOverlapping.count == 1 && [mFrameArraysForOverlapping[0] count] > kFramesToOverlap)
        {
            NSMutableArray* frameArray = mFrameArraysForOverlapping[0];
            dispatch_data_t first = frameArray[0];
            [mFrameArraysForOverlapping[0] removeObjectAtIndex: 0];
            [self blendOneFrame: first withOtherFrame: nil];
        }
    });
}

- (void)blendOneFrame: (dispatch_data_t)frameA withOtherFrame: (dispatch_data_t)frameB
{
    dispatch_async(mFrameBlendingQueue, ^{
        NSString* blendedFrame = [NSString stringWithFormat: @"%@%@", [NSStringFromDispatchData(frameA) stringByReplacingOccurrencesOfString: @"\n" withString: @""], NSStringFromDispatchData(frameB)];
        dispatch_data_t blendedFrameData = dispatch_data_create(blendedFrame.UTF8String, blendedFrame.length, NULL, DISPATCH_DATA_DESTRUCTOR_DEFAULT);
        [self deliverFrameForDisplay: blendedFrameData];
    });
}

- (void)deliverFrameForDisplay: (dispatch_data_t)frame
{
    // By suspending the queue from within the block, and by virtue of this being a serial queue, we guarantee that
    // only one task will be called for each call to dispatch_resume on the queue...

    dispatch_async(mFrameDeliveryQueue, ^{
        dispatch_suspend(mFrameDeliveryQueue);
        dispatch_sync(mFrameDeliveryStateQueue, ^{
            mLastFrameDelivered++;
            mDeliveredFrame = frame;
        });

        if (!CVDisplayLinkIsRunning(mDisplayLink))
        {
            CVDisplayLinkStart(mDisplayLink);
        }
    });
}

- (dispatch_data_t)getFrameForDisplay
{
    __block dispatch_data_t frameData = nil;
    dispatch_sync(mFrameDeliveryStateQueue, ^{
        if (mLastFrameDelivered > mLastFrameDisplayed)
        {
            frameData = mDeliveredFrame;
            mDeliveredFrame = nil;
            mLastFrameDisplayed = mLastFrameDelivered;
        }
    });

    // At this point, I've either got the next frame or I don't...
    // If I do, resume the delivery queue so it will deliver the next frame
    if (frameData)
    {
        dispatch_resume(mFrameDeliveryQueue);
    }

    return frameData;
}

@end

static void DieOnError(int error)
{
    if (error)
    {
        NSLog(@"Error in %s: %s", __PRETTY_FUNCTION__, strerror(error));
        exit(error);
    }
}

static NSString* NSStringFromDispatchData(dispatch_data_t data)
{
    if (!data || !dispatch_data_get_size(data))
        return @"";

    const char* buf = NULL;
    size_t size = 0;
    dispatch_data_t notUsed = dispatch_data_create_map(data, (const void**)&buf, &size);
#pragma unused(notUsed)
    NSString* str = [[NSString alloc] initWithBytes: buf length: size encoding: NSUTF8StringEncoding];
    return str;
}

// Peel off a frame if there is one, and put the left-overs back.
static dispatch_data_t FrameDataFromAccumulator(dispatch_data_t* accumulator)
{
    __block dispatch_data_t frameData = dispatch_data_create(NULL, 0, NULL, NULL); // empty
    __block dispatch_data_t leftOver = dispatch_data_create(NULL, 0, NULL, NULL); // empty

    __block BOOL didFindFrame = NO;
    dispatch_data_apply(*accumulator, ^bool(dispatch_data_t region, size_t offset, const void *buffer, size_t size) {
        ssize_t newline = -1;
        for (size_t i = 0; !didFindFrame && i < size; ++i)
        {
            if (((const char *)buffer)[i] == '\n')
            {
                newline = i;
                break;
            }
        }

        if (newline == -1)
        {
            if (!didFindFrame)
            {
                frameData = dispatch_data_create_concat(frameData, region);
            }
            else
            {
                leftOver = dispatch_data_create_concat(leftOver, region);
            }
        }
        else if (newline >= 0)
        {
            didFindFrame = YES;
            frameData = dispatch_data_create_concat(frameData, dispatch_data_create_subrange(region, 0, newline + 1));
            leftOver = dispatch_data_create_concat(leftOver, dispatch_data_create_subrange(region, newline + 1, size - newline - 1));
        }

        return true;
    });

    *accumulator = leftOver;

    return didFindFrame ? frameData : nil;
}

static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink, const CVTimeStamp* now, const CVTimeStamp* outputTime, CVOptionFlags flagsIn, CVOptionFlags* flagsOut, void* displayLinkContext)
{
    SOAppDelegate* self = (__bridge SOAppDelegate*)displayLinkContext;

    dispatch_data_t frameData = [self getFrameForDisplay];

    NSString* dataAsString = NSStringFromDispatchData(frameData);

    if (dataAsString.length == 0)
    {
        NSLog(@"Dropped frame...");
    }
    else
    {
        NSLog(@"Drawing frame in CVDisplayLink. Contents: %@", dataAsString);
    }

    return kCVReturnSuccess;
}

In theory, GCD is supposed to balance these queues for you. For instance, if allowing the "producer" queue to proceed were causing memory usage to go up, GCD would (in theory) start letting the other queues go, and hold back the producer queue. In practice, this mechanism is opaque to us, so who knows how well it will work under real-world circumstances, especially in the face of your real-time restrictions.

If any specific thing here is unclear, please post a comment, and I'll try to elaborate.

