[video_player_avfoundation] enable more than 30 fps #7466


Merged: 13 commits, Apr 17, 2025

Conversation

@misos1 (Contributor) commented Aug 21, 2024

The player hardcoded 30 fps when setting up the video composition. It now uses the timing from the source track, with minFrameDuration as a fallback, since frameDuration apparently must always be set to something and takes over in some situations, as mentioned in the documentation for sourceTrackIDForFrameTiming. Video composition setup is also skipped when it is not needed, i.e. when preferredTransform is the identity.
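For reference, the composition setup described above might look roughly like this (a hedged sketch, not the exact PR diff; `asset`, `videoTrack`, and `playerItem` are assumed to exist in the surrounding code):

```objc
AVMutableVideoComposition *videoComposition =
    [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset];
// Prefer real frame timing from the source track instead of a hardcoded rate.
videoComposition.sourceTrackIDForFrameTiming = videoTrack.trackID;
// frameDuration must still be set to something, since it takes over in some
// situations; fall back to the track's minimum frame duration.
videoComposition.frameDuration = videoTrack.minFrameDuration;
// Skip the composition entirely when no transform correction is needed.
if (!CGAffineTransformIsIdentity(videoTrack.preferredTransform)) {
  playerItem.videoComposition = videoComposition;
}
```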

The function updatePlayingState is usually called right after setupEventSinkIfReadyToPlay, but it seems to have been forgotten in onListenWithArguments. It also cannot simply be called there, because setupEventSinkIfReadyToPlay may finish asynchronously when it re-invokes itself via [self performSelector:_cmd onThread:NSThread.mainThread withObject:self waitUntilDone:NO]. updatePlayingState is therefore now called right after _isInitialized = YES, which is the precondition for it to do anything at all.

There was one more obstacle to playing 60 fps videos on a 60 Hz screen. At least two "display links" are at play during video playback: one calls displayLinkFired, and another, inside the Flutter engine, calls copyPixelBuffer, but only after textureFrameAvailable has been called. The order in which these two run is undefined, so 16 ms after displayLinkFired, copyPixelBuffer may be called, immediately followed by the next displayLinkFired, and so on. But copyPixelBuffer steals the newest pixel buffer from the video player output, so in displayLinkFired, hasNewPixelBufferForItemTime will not report another pixel buffer for a time close to that one. On the next frame copyPixelBuffer is then not called, because textureFrameAvailable was not called, and in this way every second frame is skipped and the video plays at 30 fps. There was also a synchronization problem with lastKnownAvailableTime. Pixel buffers are now produced and reported in a single place, displayLinkFired, and received with correct synchronization in copyPixelBuffer. Ideally there would be just a single "display link", the one from the Flutter engine, if it also supported pulling frames instead of only pushing (pushing is good enough for a camera, where the system pushes frames to us, but frames from a player video output need to be pulled). Calling textureFrameAvailable every frame could accomplish that, but it looks like this line in the Flutter engine runs even when copyPixelBuffer returns NULL, and it may be expensive (although there is no need to call it in that case):

sk_sp<flutter::DlImage> image = [self wrapExternalPixelBuffer:_lastPixelBuffer context:context];
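The single-producer arrangement described above can be sketched like this (illustrative only; property names such as `pixelBufferSynchronizationQueue` and `latestPixelBuffer` follow the discussion, not necessarily the final diff):

```objc
// Producer: the plugin's display link copies the newest pixel buffer in one
// place and then notifies the engine that a frame is available.
- (void)displayLinkFired {
  CMTime outputItemTime = [self.videoOutput itemTimeForHostTime:CACurrentMediaTime()];
  if ([self.videoOutput hasNewPixelBufferForItemTime:outputItemTime]) {
    CVPixelBufferRef buffer =
        [self.videoOutput copyPixelBufferForItemTime:outputItemTime itemTimeForDisplay:NULL];
    dispatch_async(self.pixelBufferSynchronizationQueue, ^{
      if (self.latestPixelBuffer) {
        CFRelease(self.latestPixelBuffer);
      }
      self.latestPixelBuffer = buffer;  // the updater owns this +1 reference
    });
    [self.registry textureFrameAvailable:self.textureId];
  }
}

// Consumer: the engine's own "display link" pulls whatever was stored,
// taking over ownership of the reference.
- (CVPixelBufferRef)copyPixelBuffer {
  __block CVPixelBufferRef buffer = NULL;
  dispatch_sync(self.pixelBufferSynchronizationQueue, ^{
    buffer = self.latestPixelBuffer;
    self.latestPixelBuffer = NULL;
  });
  return buffer;
}
```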

There seems to be a bug in the video player with this Flutter engine on macOS. The video appears to play normally but then starts "tearing": frames display normally, but once in a while it shows some frame from the past, like a previously cached frame. This happens on the main branch; rendering at 60 fps merely exaggerates it (it is not caused by this PR).


@@ -21,27 +21,38 @@ @interface FVPFrameUpdater : NSObject
@property(nonatomic, weak, readonly) NSObject<FlutterTextureRegistry> *registry;
// The output that this updater is managing.
@property(nonatomic, weak) AVPlayerItemVideoOutput *videoOutput;
// The last time that has been validated as available according to hasNewPixelBufferForItemTime:.
@property(nonatomic, assign) CMTime lastKnownAvailableTime;
@property(nonatomic) CVPixelBufferRef latestPixelBuffer;
Contributor

All properties need comments, per the style guide. Please explain in comments what the purpose of these two new properties is.

if (self.latestPixelBuffer) {
CFRelease(self.latestPixelBuffer);
}
self.latestPixelBuffer = [self.videoOutput copyPixelBufferForItemTime:outputItemTime
Contributor

This is essentially doubling the memory usage for video output, isn't it? Why doesn't the previous approach of only storing the timestamp work? The PR description discusses the early-consume problem, but it seems like that could be addressed simply by changing copyPixelBuffer to prefer the last time instead of the current time.

misos1 (Contributor Author) commented Aug 23, 2024

This is essentially doubling the memory usage for video output, isn't it?

Can you please explain how? Memory usage with the original approach might be one frame less if the Flutter engine released its previous pixel buffer right before calling copyPixelBuffer, but it would not want to do that, because copyPixelBuffer can also return NULL and the engine then still needs the latest frame to show.

Contributor

Can you please explain how?

Aren't you keeping an extra copy of the frame besides the one kept by the player and the one kept by the engine?

I guess not doubled, but increasing by one frame relative to the current implementation.

(Also looking again, the current PR code appears to be leaking every frame it consumes.)

Contributor Author

(Also looking again, the current PR code appears to be leaking every frame it consumes.)

I cannot see it.

Contributor

Oh, I see now. The memory flow is pretty hard to follow here on the copied buffer as currently written, with the buffer sometimes freed by the frame updater, and sometimes handed off to the engine.

Which brings us back to the initial question: why do we need to copy the buffer proactively when the display link fires instead of storing the timestamp?

Contributor Author

Aren't you keeping an extra copy of the frame besides the one kept by the player and the one kept by the engine?

Both versions have a worst case of 2+N pixel buffers, where N is the number held by the player. This case occurs between copyPixelBufferForItemTime and the moment the engine replaces its own buffer after copyPixelBuffer. The current version can just hold 2+N for a little longer, especially when displayLinkFired is called after copyPixelBuffer on each frame. Incidentally, if the player kept only a single frame, then storing a timestamp instead of the buffer would not work at all.

Which brings us back to the initial question: why do we need to copy the buffer proactively when the display link fires instead of storing the timestamp?

That would mean fetching pixel buffers from the past, especially when displayLinkFired is called after copyPixelBuffer on each frame. But that rests on the undocumented, and possibly wrong, assumption that the player always leaves at least one past frame ready for us. What if that is not the case? It actually is not: the player periodically flushes all past pixel buffers. Once about 3 pixel buffers have accumulated in the past, the player flushes them all.

I tried an implementation that sent timestamps instead of pixel buffers, and copyPixelBufferForItemTime often returned NULL for a timestamp for which hasNewPixelBufferForItemTime had previously returned true. It had on average 4x more frame drops in my tests compared to a slightly modified version of the current implementation (with copyPixelBufferForItemTime moved outside of pixelBufferSynchronizationQueue).

There are several causes of frame drops. This modified implementation minimises the cases where copyPixelBufferForItemTime returns NULL, and incidentally also the frame drops caused by artefacts of using two "display links", namely displayLinkFired and copyPixelBuffer changing order relative to each other. Because the work on pixelBufferSynchronizationQueue is short (in time), and a new pixel buffer is generated only after some time, even when copyPixelBuffer runs after displayLinkFired on some frame (while previously it ran before it), it has a chance to obtain the pixel buffer from the previous frame's displayLinkFired without clearing the _textureFrameAvailable set by the latest displayLinkFired. This is of course not ideal; a more proper way would be to wait until the middle of the frame and then publish a new pixel buffer and call textureFrameAvailable, but an even more proper solution would be to not use two "display links" at all.

Another cause of frame drops is hasNewPixelBufferForItemTime returning false, which is now the prevalent one. This seems to be caused by irregularities in when displayLinkFired is called, which lead to irregular timestamps from CACurrentMediaTime. I tested using CADisplayLink.timestamp instead, and frame drops fell almost to zero, below 0.1%, around 20x fewer than the (modified) current implementation on average in my tests. But this would need access to CADisplayLink through FVPDisplayLink, and some other implementation for macOS.

I also tested an implementation without a display link, where textureFrameAvailable and copyPixelBuffer were called for each frame; it had around 3x fewer frame drops than the (modified) current implementation. Unfortunately I could not use CADisplayLink.timestamp there, so irregularities still caused some frame drops. The Flutter engine would need to provide something similar, or maybe it can simply be obtained by rounding CACurrentMediaTime down to the display's refresh duration, but that would then require the display's update frequency from the engine (or something like what FVPDisplayLink does for macOS).

Contributor

That would mean fetching pixel buffers from the past, especially when displayLinkFired is called after copyPixelBuffer on each frame. But that rests on the undocumented, and possibly wrong, assumption that the player always leaves at least one past frame ready for us. What if that is not the case? It actually is not: the player periodically flushes all past pixel buffers. Once about 3 pixel buffers have accumulated in the past, the player flushes them all.

I tried an implementation that sent timestamps instead of pixel buffers, and copyPixelBufferForItemTime often returned NULL for a timestamp for which hasNewPixelBufferForItemTime had previously returned true. It had on average 4x more frame drops in my tests compared to a slightly modified version of the current implementation

Very interesting, thanks for the details! That definitely seems worth the slight memory hit.

Another cause of frame drops is hasNewPixelBufferForItemTime returning false, which is now the prevalent one.

Could you file an issue with the details of what you've found here for us to follow up on in the future? It sounds like you've done a lot of great investigation here that we should be sure to capture and track.

@stuartmorgan-g
Contributor

@misos1 Are you still planning on addressing the remaining comments?

@misos1
Contributor Author

misos1 commented Sep 17, 2024

@stuartmorgan Yes, I was waiting for your input, as I thought you wanted to handle the case where minFrameDuration is kCMTimeInvalid differently. I thought of setting frameDuration to some constant in that case and also emitting a warning log message, like I have sometimes seen the Flutter engine do, but I did not find how that is normally done in plugins.

@stuartmorgan-g
Contributor

Sorry, I didn't realize that was waiting for my input. I'll respond there.

@end

@implementation FVPFrameUpdater
- (FVPFrameUpdater *)initWithRegistry:(NSObject<FlutterTextureRegistry> *)registry {
NSAssert(self, @"super init cannot be nil");
if (self == nil) return nil;
_registry = registry;
_lastKnownAvailableTime = kCMTimeInvalid;
return self;
}

- (void)displayLinkFired {
// Only report a new frame if one is actually available.
CMTime outputItemTime = [self.videoOutput itemTimeForHostTime:CACurrentMediaTime()];
if ([self.videoOutput hasNewPixelBufferForItemTime:outputItemTime]) {
Contributor

Hm, shouldn't these two lines be inside the dispatch_async? AVPlayerItemVideoOutput doesn't seem to be marked as threadsafe.

@misos1
Contributor Author

misos1 commented Sep 26, 2024

There seems to be a bug in the video player with this Flutter engine on macOS. The video appears to play normally but then starts "tearing": frames display normally, but once in a while it shows some frame from the past, like a previously cached frame. This happens on the main branch; rendering at 60 fps merely exaggerates it (it is not caused by this PR).

This seems to be caused by a bug in the Flutter engine. The documentation says: "You need to maintain a strong reference to textureOut until the GPU finishes execution of commands accessing the texture, because the system doesn't automatically retain it." But here textureOut is released right after CVMetalTextureCacheCreateTextureFromImage:

https://github.com/flutter/engine/blob/6f802b39ab0669eb6ba3272dff1d34e85febeb77/shell/platform/darwin/graphics/FlutterDarwinExternalTextureMetal.mm#L233-L249
https://github.com/flutter/engine/blob/6f802b39ab0669eb6ba3272dff1d34e85febeb77/shell/platform/darwin/graphics/FlutterDarwinExternalTextureMetal.mm#L177-L197

I thought it was flashing video frames from the past, but looking closely at a screen recording, at one moment it shows, for 1/60 of a second, a frame from 12 frames into the future. Maybe when the underlying textureOut is released, its memory can be reused and overwritten by AVPlayer while decoding subsequent video frames.

This does not explain another thing I noticed quite as well: if copyPixelBuffer returns NULL for some frame, the engine shows a transparent image. But I will assume this is also caused by that use-after-free bug.

There is also this "Note that Core Video doesn’t explicitly declare any pixel format types as Metal compatible. Specify true for the kCVPixelBufferMetalCompatibilityKey option to create Metal-compatible buffers when creating or requesting Core Video pixel buffers.". Maybe the player should also specify this in pixBuffAttributes when creating AVPlayerItemVideoOutput?

@stuartmorgan-g
Contributor

stuartmorgan-g commented Sep 26, 2024

Seems this is caused by a bug in the flutter engine.

Please definitely file an issue with details if you haven't already!

Edited to add: You can cross-reference the engine issue with flutter/flutter#135999, which sounds like the macOS playback issue you are describing here if I'm understanding correctly.

@stuartmorgan-g
Contributor

There is also this "Note that Core Video doesn’t explicitly declare any pixel format types as Metal compatible. Specify true for the kCVPixelBufferMetalCompatibilityKey option to create Metal-compatible buffers when creating or requesting Core Video pixel buffers.". Maybe the player should also specify this in pixBuffAttributes when creating AVPlayerItemVideoOutput?

This would probably be a question for the #hackers-engine channel; I'm not familiar with the current end-to-end pipeline requirements for the engine texture rendering path.

@stuartmorgan-g
Contributor

In terms of moving this forward, from my perspective:

  • Structurally everything here makes sense to me at this point; thanks again for the clear explanations!
  • I don't consider the macOS issue blocking as it's an existing issue. If we get feedback that the experience is substantially worse we can always do a follow-up to disable just >30fps for macOS, while leaving all the other improvements (e.g., threading model fixes) in place.

So once the smaller feedback items still open are addressed in an updated version of the PR, I can re-review, and I expect we'll be on track for getting this landed.

Does that sound right?

@misos1
Contributor Author

misos1 commented Sep 26, 2024

If we get feedback that the experience is substantially worse we can always do a follow-up to disable just >30fps for macOS, while leaving all the other improvements (e.g., threading model fixes) in place.

Yes; the video composition can be set every time, even with affinity and fps forced to 30, and it still happens. I wonder why no one reported this. There are other things which may seemingly help a little with it, like retaining the pixel buffer in the frame updater and creating a fresh copy of the pixel buffer (I first thought that AVPlayer at some point rewrites pixel buffers already returned by copyPixelBufferForItemTime, but that is not the case). At least these worked on my specific system, but nothing is certain when there is UB; someone may experience it even worse with a 30 fps version.

@stuartmorgan-g
Contributor

I wonder why no one reported this.

I think they have, per my edit to this comment above. Unless that's not the same behavior you're describing?

And there are other things which may seemingly help little with it like retaining pixel buffer by frame updater and creating fresh copy of pixel buffer

If we have lifetime issues within the engine pipeline itself, that's definitely something we should fix in the engine rather than try to hack around at the video_player level.

@misos1
Contributor Author

misos1 commented Sep 26, 2024

I think they have, per my edit to this comment above. Unless that's not the same behavior you're describing?

Maybe partially. As I wrote, if copyPixelBuffer returns NULL for some frame, the engine shows a transparent image, meaning it flickers through to the background. I did not experience this on iOS, although the two seem to share the same code, so the UB should also exist on iOS.

@misos1
Contributor Author

misos1 commented Sep 29, 2024

Would it be doable to add a new capability to the Flutter engine and depend on it in this package, or does it also need to work with the engine from before such a change?

@stuartmorgan-g
Contributor

Whether we try to work around engine bugs at the plugin level is something we decide on a case-by-case basis. If the macOS engine texture pipeline is buggy, I would not attempt to work around that at the plugin level without a very compelling reason to do so.

@misos1
Contributor Author

misos1 commented Sep 30, 2024

No, I do not mean that; rather, the engine being able to pull pixel buffers instead of needing textureFrameAvailable. Something like a flag so the engine would call copyPixelBuffer for every frame: if it returns NULL it does nothing, and if it returns a pixel buffer it updates its internal buffers.

@stuartmorgan-g
Contributor

I'm not really sure what you mean by "for every frame", but if you want to propose a new engine API the place to start would be a design document. Details of exactly when video_player would adopt a new proposed API would be figured out much later in the process.

@misos1
Contributor Author

misos1 commented Sep 30, 2024

By "for every frame" I mean that it would be called as if _textureFrameAvailable were always true. _textureFrameAvailable is set to true by calling textureFrameAvailable and set back to false soon after copyPixelBuffer is called.

@misos1
Contributor Author

misos1 commented Oct 10, 2024

Regarding #7466 (comment), it seems that macOS actually uses a different code path than iOS, but it has the same problem: cvMetalTexture is released right after CVMetalTextureCacheCreateTextureFromImage:

https://github.com/flutter/engine/blob/6f802b39ab0669eb6ba3272dff1d34e85febeb77/shell/platform/darwin/macos/framework/Source/FlutterExternalTexture.mm#L119-L135
https://github.com/flutter/engine/blob/6f802b39ab0669eb6ba3272dff1d34e85febeb77/shell/platform/darwin/macos/framework/Source/FlutterExternalTexture.mm#L77-L97

For both, the relevant change seems to be some 4 years old, so I am not sure why such similar things were implemented twice.

This is eventually called from embedder_external_texture_metal.mm, which handles things differently: it does not use a flag like _textureFrameAvailable as FlutterDarwinExternalTextureMetal.mm does, but instead nullifies the last stored image when textureFrameAvailable is called. So if copyPixelBuffer returns NULL, it shows nothing and periodically calls it again until it returns non-NULL, which explains what I observed.

I am not sure whether this is intended, but FlutterTexture says nothing about whether copyPixelBuffer may return NULL or what should happen in such a case:

/**
 * Copy the contents of the texture into a `CVPixelBuffer`.
 *
 * The type of the pixel buffer is one of the following:
 * - `kCVPixelFormatType_32BGRA`
 * - `kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange`
 * - `kCVPixelFormatType_420YpCbCr8BiPlanarFullRange`
 */
- (CVPixelBufferRef _Nullable)copyPixelBuffer;

So I will assume that returning NULL from copyPixelBuffer after previously returning non-NULL is undefined (returning NULL at least once at start is practically unavoidable, because the engine calls it until it returns non-NULL even without textureFrameAvailable being called). Then both the old and the current implementation are not entirely correct, because they can return NULL from copyPixelBuffer, since copyPixelBufferForItemTime can return NULL at any time. The player therefore needs to remember the last returned pixel buffer and, when it has no new one, return the remembered buffer again until a new one arrives.
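A sketch of that guard (hypothetical; `lastReturnedBuffer` and `copyLatestPixelBuffer` are illustrative names, not the plugin's actual API):

```objc
- (CVPixelBufferRef)copyPixelBuffer {
  CVPixelBufferRef newBuffer = [self copyLatestPixelBuffer];  // may be NULL
  if (newBuffer) {
    if (self.lastReturnedBuffer) {
      CFRelease(self.lastReturnedBuffer);
    }
    // Remember the frame so it can be handed out again if needed.
    self.lastReturnedBuffer = (CVPixelBufferRef)CFRetain(newBuffer);
    return newBuffer;  // the caller (the engine) owns this reference
  }
  // No new frame: re-return the remembered one with an extra retain, so
  // copyPixelBuffer never returns NULL after its first success.
  return self.lastReturnedBuffer ? (CVPixelBufferRef)CFRetain(self.lastReturnedBuffer)
                                 : NULL;
}
```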

@stuartmorgan-g (Contributor) left a comment

Sorry for the delay, I needed a block of time where I could swap all the context back in here since the implementation has changed significantly since my last review.

// The display link that drives frameUpdater.
@property(nonatomic) FVPDisplayLink *displayLink;
// The time interval between screen refresh updates.
@property(nonatomic) _Atomic CFTimeInterval duration;
Contributor

Why is this using stdatomic instead of just making the property atomic? A nonatomic _Atomic property seems needlessly confusing.

Contributor Author

I think it is not documented whether an atomic property with a primitive type can be lock free, so the atomic property looked like overkill. Although there are some hints that it probably can be, so there is probably no problem changing it to an atomic property.

Contributor Author

I deem std atomics readable enough, and there is the nice choice of memory ordering, but yes, an atomic property would probably make for simpler code, even with that potential small overhead (I do not know what ordering it uses if it is lock free).

Contributor

I deem std atomics as readable enough

You are not the only person who will be reading and maintaining this code. Obj-C developers are overwhelmingly going to be more familiar with atomic than _Atomic (in fact, I don't think I've ever seen _Atomic in Obj-C code in literally decades of Obj-C development).

Unless you have benchmark data showing that atomic is an issue in practice, please use it.

Contributor Author

You are not the only person who will be reading and maintaining this code.

I did not want to imply that. It is part of a sentence which concludes that an atomic property probably has the better tradeoff of simplicity (readability) vs. performance (i.e., it is the better choice).
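For clarity, the two alternatives being debated are (declaration sketch only):

```objc
// C11 stdatomic, as in the PR at this point: the property itself is
// nonatomic, but its backing storage is an _Atomic CFTimeInterval.
@property(nonatomic) _Atomic CFTimeInterval duration;

// Reviewer's suggestion: an ordinary atomic Obj-C property, which is the
// idiom most Obj-C readers expect.
@property(atomic, assign) CFTimeInterval duration;
```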

[_registry textureFrameAvailable:_textureId];
}
// Display link duration is in an undefined state until displayLinkFired is called at least once
// so it should not be used directly.
Contributor

This seems like it belongs on the ivar declaration rather than here.

} else {
NSLog(@"Warning: videoTrack.minFrameDuration for input video is invalid, please report this to "
@"https://github.com/flutter/flutter/issues with input video attached.");
videoComposition.frameDuration = CMTimeMake(1, 120);
Contributor

Why 120 fps rather than 30 as the fallback?

Contributor Author

And why not 60? There is no good universal value. Some displays are 120 Hz, and this fallback should not ever be used anyway.

Contributor

And why not 60?

Because I would like the fallback to be conservative, and I think 30 is a reasonable baseline value for video. Which is why I proposed it at the beginning of the review.

this should not be ever used anyway

If you believe this code path will never be used, I'm not sure why you have pushed back repeatedly against my suggestions for how we handle it.

Contributor Author

The hardcoded constant of 30 fps was what led to this PR in the first place. With 30, the worst case is worse playback for videos with higher fps, while something higher is more inclusive; the worst case might then be performance related, but I did not observe that even when I set 1000 or 1000000 here, and it never produces more frames than are in the source video. Actually my constant is conservative: it does not account for slow-motion videos (if one has an actual frame rate of, for example, 240 fps rather than just 30 fps played slower, it can be 240 fps with an instruction to play at 0.125 rate; in that case frameDuration needs to be CMTimeMake(1, 240), and with 120 it would play at 15 fps). Also, I think the more "standard" rate is actually 24 fps (devised some 100 years ago), not 30, and 120 is divisible by 24.

Contributor

Hardcoded constant of 30 fps was what led to this PR in the first place.

And this PR changes the use of the hard-coded constant from "always" to "cases where the frame duration can't be computed, of which we currently have no known examples", which constitutes a substantial improvement.

With 30 the worst case would be worse playback for videos with more fps, while something higher is more inclusive and worst case could be maybe performance related

Yes, I am fully aware of that. I would prefer that cases we don't have a good understanding of and are actively reporting via logging as not handled correctly err on the side of worse playback rather than app performance problems.

For instance, we know from issue reports that people use this package to play audio files even though we do not recommend that, and I would rather that have worse (imaginary) video playback than performance problems if that's a case that can trigger this fallback.

Also I think more "standard" is actually 24 fps (devised some 100 years ago), not 30, and 120 is divisible by 24.

For a fallback where we have no information about the video's framerate, I am more concerned with screen rate than film standards. Many common phone screen refresh rates are not divisible by 24.

Contributor

I do not know whether it will be ever used or not.

It is unconstructive to repeatedly assert that this case will "not be ever used" and is "imaginary" as a way to dismiss my position, and then argue the reverse when it comes to your position.

If you believe this cannot happen, then there is literally no downside to just making the change that I am asking for; if you do believe it could potentially happen, then please stop asserting that it can't.

As I wrote, 120 seems to me a better choice than 30.

Yes, I understand that, I just don't agree. I do not find the arguments you've presented compelling enough to warrant approving a behavioral change (relative to the version that is already in production) for a case we have no actual examples of, and thus no ability to evaluate concretely.

Contributor Author

When I mentioned "imaginary" I was referring to your "...have worse (imaginary) video..." above; it was just a reaction to what you wrote. I wrote "this should not be ever used", not "(will) not be ever used", and that does not mean I "believe" it will certainly never happen (in that case I would have written "this will not be ever used", as if it were literally dead code). I may think the chance is small, but this is really unknown until I can test it on every existing video in the world.

So if I understand correctly your arguments are:

  1. Better to use something that was here before as fallback due to behavioral change.
  2. It should be "harmonic" with refresh rate, so nothing like 24 fps, to avoid resulting artifacts and worse playback.
  3. It should be something high enough as to not cause worse playback than 30 fps would, so nothing less than 30 like 15 or 1.
  4. It should not be too high due to performance concerns, even 60 is too much.

I would at least reconsider using 60 fps here, as it covers both 30 and 60 fps videos; although 30 fps is probably still more common, both are very popular, and 60 simply covers both. And even if setting 60 here meant the video composition would check for a new frame 2x more often, consider that even with 30 here the player plugin itself already has to check for a new frame at the display refresh rate, which is at least 60 Hz (there seems to be a possibility of 30 Hz and lower, but it is opt-in in CADisplayLink).

Contributor

When I mentioned "imaginary" I was referring to your "...have worse (imaginary) video..." before and it was just a reaction to what you wrote.

The text of mine you are quoting is from a paragraph that was explicitly about audio files, where there is no video. Thus, the video playback in question would be imaginary.

Your use of "imaginary" was in reply to a later paragraph where I was talking about "fallback where we have no information about the video's framerate"; i.e., every case that will hit this line of code.

Those aren't the same cases, and it's important to distinguish between the specific case of audio files, and the general case (which we don't have a clear enumeration of) where we get no framerate information.

I would at least reconsider to use 60 fps here

That does not satisfy point 1. If you can provide a concrete example of a video file that falls into this case, I'm happy to consider changing the value away from what we previously had since we will have at least one data point for what the impact of that change will be. If not, I don't want to make speculative changes about cases we don't have any information about.

Contributor Author

Right, sorry; I was misremembering the logic used to set videoComposition.frameDuration as feeding into the display link.

So maybe this is the reason for your performance concerns? Now that frameDuration is not used for the display link frequency, does this change the whole situation? To recapitulate what I wrote earlier: the video output never gives more pixel buffers per second than the video's fps, even with values like 1000 or a million, and I did not notice any performance impact with such values. Also, when sourceTrackIDForFrameTiming is used, timing is mainly driven by the track, which further minimizes any possible negative impact of frameDuration. So there are several indications that there should not be any performance concerns.

Contributor

So maybe this is the reason for your performance concerns? Now as frameDuration is not used for display link frequency, does this change the whole situation?

That certainly reduces my concerns somewhat. However:

the video output never gives more pixel buffers per second than the video's fps

What exactly does that mean in the context of a video where frameDuration is not set?

even with values like 1000 or a million, and I did not notice any performance impact with such values

Did you test with any videos where frameDuration is not set, which is the only case where this code would come into play?

I continue not to see a compelling argument in favor of changing the behavior of a case we do not understand, and have no examples of. By definition, any such change is purely speculative and cannot be validated.

// outside of which targetTime is reset should be narrow enough to make possible lag as small as
// possible and at the same time wide enough to avoid too frequent resets which would lead to
// irregular sampling. Ideally there would be a targetTimestamp of display link used by flutter
// engine (FlutterTexture can provide timestamp and duration or timestamp and targetTimestamp).
Contributor

I think the parenthetical is describing a desired feature? As worded, it sounds like it's describing something that currently exists, which is confusing. I would replace this entire sentence with a TODO referencing an issue that requests the feature in detail, so that it's clearer what the context is and where the feature is tracked.

}

// Better to avoid returning NULL as it is unspecified what should be displayed in such a case.
return CVBufferRetain(self.latestPixelBuffer);
Contributor

And then here: // Add a retain for the engine, since the `copyPixelBufferForItemTime:` has already been accounted for, and the engine expects an owning reference.

if (CVDisplayLinkGetCurrentTime(self.displayLink, &timestamp) != kCVReturnSuccess) {
return 0;
}
return 1.0 * timestamp.videoRefreshPeriod / timestamp.videoTimeScale;
Contributor

What is the 1.0 * for? If it's to make this a double, just cast to a double.

});
}

// Better to avoid returning NULL as it is unspecified what should be displayed in such a case.
Contributor

I think it would be better to comment about this on the declaration of latestPixelBuffer rather than here. Something like "The last buffer returned in copyPixelBuffer. This is stored so that it can be returned again if nothing new is available from the video buffer, since the engine has undefined behavior when returning NULL."

@stuartmorgan-g stuartmorgan-g added the triage-ios Should be looked at in iOS triage label Nov 8, 2024
@@ -125,6 +125,7 @@ @interface StubFVPDisplayLinkFactory : NSObject <FVPDisplayLinkFactory>

/** This display link to return. */
@property(nonatomic, strong) FVPDisplayLink *displayLink;
@property(nonatomic) void (^fireDisplayLink)(void);
Contributor

nit: (nonatomic, copy)

@property(nonatomic, assign) CMTime lastKnownAvailableTime;
// The display link that drives frameUpdater.
@property(nonatomic) FVPDisplayLink *displayLink;
// The time interval between screen refresh updates.
Contributor

Nit: Can you call it something like frameDuration, frameDelta, or displayLinkDuration?

@@ -543,16 +562,25 @@ - (void)setPlaybackSpeed:(double)speed {
}

- (CVPixelBufferRef)copyPixelBuffer {
// Ensure video sampling at regular intervals. This function is not called at exact time intervals
// so CACurrentMediaTime returns irregular timestamps which causes missed video frames. The range
// outside of which targetTime is reset should be narrow enough to make possible lag as small as
Contributor

nit: make lags (due to skipping frames?) less frequent.

Contributor Author

@misos1 misos1 Feb 2, 2025

"Lag" here means that targetTime is lagging behind current time or conversely.

// so CACurrentMediaTime returns irregular timestamps which causes missed video frames. The range
// outside of which targetTime is reset should be narrow enough to make possible lag as small as
// possible and at the same time wide enough to avoid too frequent resets which would lead to
// irregular sampling. Ideally there would be a targetTimestamp of display link used by flutter
Contributor

Can you add more explanation on how it leads to irregular sampling?

Can we do an experiment to always reset the time (remove the if check below) and see how it performs on a sample video?

Contributor Author

@misos1 misos1 Nov 18, 2024

Can you add more explanation on how it leads to irregular sampling?

Because each "reset" (meaning if is true) changes targetTime to a different value than it had. Adding the same number (self.frameUpdater.duration) results in regular (enough) time intervals in targetTime but suddenly changing it to value "outside" of that breaks that regularity. In the worst case when it changes always it is like directly using CACurrentMediaTime.

Can we do an experiment to always reset the time (remove the if check below) and see how it performs on a sample video?

Yes, I did that, of course. I also calculated the standard deviation and tried to compute the probability of a frame drop; the result was about 10x higher than what I observed, but I probably did something wrong in that calculation (it does not really matter). Also, the distribution here is not normal, so it may not match calculations that assume a normal distribution. Below, black is the distribution of differences between results of CACurrentMediaTime in consecutive copyPixelBuffer calls (on my iOS device) and green is a normal distribution (with the same standard deviation). The horizontal axis goes from around 8 to 24 ms with 1/60 s in the middle, and the vertical axis is the relative number of occurrences (the graph shows that there should be fewer drops than with a normal distribution, so I am satisfied with it as an explanation of why my calculated number was higher):

[Screenshot 2024-11-18 at 18:45:33: graph of the measured delta distribution (black) vs. a normal distribution (green)]

@cbracken
Member

@misos1 when you get a chance, please review @stuartmorgan's latest round of comments. Thanks for your contribution.

// some other alternative, instead of on demand by calling textureFrameAvailable.
if (self.displayLink.running) {
dispatch_async(dispatch_get_main_queue(), ^{
[self.frameUpdater.registry textureFrameAvailable:self.frameUpdater.textureId];
Contributor

I missed this change in the last review. So this version just constantly calls textureFrameAvailable:, as fast as possible, unconditionally, while the video is playing? Doesn't that just completely defeat the purpose of having the display link at all?

Contributor Author

The display link is still usable for obtaining the actual duration and for starting after a pause. I also tried calling it conditionally, using hasNewPixelBufferForItemTime with targetTime + duration (another +duration into the future), but it did not work well for below-60-fps videos; the race problem exists even when only some video frames depend on textureFrameAvailable from the display link.

Contributor

Display link is still usable for obtaining actual duration and for starting after pause.

Neither of those things requires a regularly firing display link; one is just a getter, and the other is a one-time call. This version appears to have two sets of regular calls: the display link callback, on a set cadence that we determine based on the video, and then this, which runs as fast as possible no matter what the refresh rate, frame rate, or elapsed time are.

I understand the goals of the previous iterations of this PR, but I don't understand unconditionally driving buffer copies as fast as possible no matter what, or constantly telling the engine that there are new frames available regardless of whether or not that's true.

Contributor Author

on a set cadence that we determine based on the video

It is actually based on the display refresh rate.

which is as fast as possible no matter what the refresh rate, frame rate, or elapsed time are

This too: it is not faster than the display refresh rate, it is at the display refresh rate.

Contributor

@stuartmorgan-g stuartmorgan-g Nov 25, 2024

It is actually based on display refresh rate.

Right, sorry; I was misremembering the logic used to set videoComposition.frameDuration as feeding into the display link.

This too, it is not faster than display refresh rate, it is at display refresh rate.

How is an unconditional and immediate call every time a frame is provided the same as the refresh rate?

Contributor Author

@misos1 misos1 Dec 5, 2024

I did not observe such high deviations. But much smaller deviations (than the screen refresh period) are enough to yield different frames when using CACurrentMediaTime vs. something more regular. All calls to CACurrentMediaTime may return a timestamp strictly between the timestamps of the current and next screen refresh, but the time points of frames in the video are shifted randomly relative to this (and this shift may change over time). Here is an example of irregular sampling (as with CACurrentMediaTime) where video frame 2 was shown twice while frame 3 was dropped, in contrast with regular sampling (as with targetTime in this PR):

screen refresh number:   -1---|---2---|---3---|---4---|---
irregular sampling:       v        v    v           v
video frame number:      |---1---|---2---|---3---|---4---|
regular sampling:          ^       ^       ^       ^

Contributor

Thanks, that's extremely helpful! I understand the issue much more clearly now. It would be a great idea to put that ASCII diagram into the comment in the code.

I think the next step is to file an issue to explore with the engine team whether making the copyPixelBuffer/textureFrameAvailable more explicit about call patterns (e.g., saying that calls will not be more frequent than ~the screen refresh rate) is something the engine team is comfortable with, so we know whether or not we need to build limiting logic at the plugin level.

Contributor Author

So from flutter/flutter#160520 (comment), "We could make the behavior more documented/explicit but we may still need to change things in the future" — should I take that as a yes, or do I need to implement that frame limiting? What is the next step?

Contributor

The "may still need to change things in the future" means we shouldn't rely on the current behavior, so we should do plugin-level frame limiting.

Contributor Author

Oh ok, I understood that as "ok, but in future switch to platform views".

@misos1
Contributor Author

misos1 commented Feb 2, 2025

I suppose I have to merge the latest upstream for these failing checks to pass? "warning - lib/src/common/package_looping_command.dart:343:30 - The receiver can't be null"

Contributor

@hellohuanlin hellohuanlin left a comment

LGTM! The review comments are very helpful for me to understand!

@jmagman
Member

jmagman commented Feb 12, 2025

@misos1 the analyzer warnings look unrelated to your change (warnings are related to turning on the analyzer for another part of the code). Could you rebase/merge onto master, and resolve the merge conflicts?

@stuartmorgan-g
Contributor

In case this was waiting on me: the changes here LGTM. I was just waiting until the merge needed to pass CI had happened before the final review+approval, since that merge will itself need a review.

@jmagman
Member

jmagman commented Mar 31, 2025

@misos1 the analyzer warnings look unrelated to your change (warnings are related to turning on the analyzer for another part of the code). Could you rebase/merge onto master, and resolve the merge conflicts?

Hi @misos1, friendly ping that this would still need a rebase to land (to get past the analyzer issues), and also re-update the CHANGELOG and pubspec. It's so close to landing! 🙂

Contributor

@stuartmorgan-g stuartmorgan-g left a comment

LGTM. Sorry for the delay on the final approval! I didn't notice it was ready since I get a ton of notification emails that are just code pushes on PRs, and they usually aren't relevant.

Thanks again for all the iteration and deep investigation on this issue to get to something that is as robust as possible given the current engine constraints.

@stuartmorgan-g stuartmorgan-g added the autosubmit Merge PR when tree becomes green via auto submit App label Apr 17, 2025
@auto-submit auto-submit bot merged commit 04de46e into flutter:main Apr 17, 2025
82 checks passed
engine-flutter-autoroll added a commit to engine-flutter-autoroll/flutter that referenced this pull request Apr 21, 2025
github-merge-queue bot pushed a commit to flutter/flutter that referenced this pull request Apr 21, 2025
flutter/packages@2fcc403...ac21f53

2025-04-20 [email protected] Roll Flutter from
3ed38e2 to cfb887c (17 revisions) (flutter/packages#9118)
2025-04-19 [email protected] [various] Scrubs pre-SDK-21 Android
code (flutter/packages#9112)
2025-04-18 [email protected] Roll Flutter from
ecabb1a to 3ed38e2 (23 revisions) (flutter/packages#9114)
2025-04-18 [email protected] [flutter_svg] feat: Expose the
`colorMapper` property in `SvgPicture` (flutter/packages#9043)
2025-04-18 [email protected] [tool] Add initial file-based command
skipping (flutter/packages#8928)
2025-04-18 [email protected] [pigeon] Convert test plugins to SPM
(flutter/packages#9105)
2025-04-18 [email protected]
[webview_flutter] Adds support to control overscrolling
(flutter/packages#8451)
2025-04-17 [email protected] [in_app_purchase] add
Storefront.countryCode() and AppStore.sync() (flutter/packages#8900)
2025-04-17 [email protected]
[webview_flutter_wkwebview] Expose the allowsLinkPreview property in
WKWebView for iOS (flutter/packages#5029)
2025-04-17 [email protected]
[webview_flutter_android][webview_flutter_wkwebview] Adds platform
implementations to set over-scroll mode (flutter/packages#9101)
2025-04-17 [email protected]
[shared_preferences] Update AGP to 8.9.1 (flutter/packages#9106)
2025-04-17 [email protected] [pigeon] Adds
Kotlin lint tests to example code and fix lints (flutter/packages#9034)
2025-04-17 [email protected]
[video_player_avfoundation] enable more than 30 fps
(flutter/packages#7466)
2025-04-17 [email protected] Roll Flutter from
aef4718 to ecabb1a (25 revisions) (flutter/packages#9104)
2025-04-16 [email protected] [pigeon] Unify iOS and macOS test
plugins (flutter/packages#9100)
2025-04-16 [email protected] Roll Flutter from
db68c95 to aef4718 (7 revisions) (flutter/packages#9098)
2025-04-16 [email protected]
[webview_flutter_platform_interface] Adds method to set overscroll mode
(flutter/packages#9099)
2025-04-16 [email protected] Update `CODEOWNERS`
(flutter/packages#8984)
2025-04-16 [email protected] [google_sign_is] Update iOS SDK to
8.0 (flutter/packages#9081)
2025-04-16 [email protected] [camera_avfoundation]
Implementation swift migration (flutter/packages#8988)
2025-04-16 [email protected] [go_router]
Adds `caseSensitive` to `GoRoute` (flutter/packages#8992)
2025-04-16 [email protected] Manual roll Flutter from
30e53b0 to db68c95 (98 revisions) (flutter/packages#9092)
2025-04-15 [email protected] [tool] Run config-only build for
iOS/macOS native-test (flutter/packages#9080)

If this roll has caused a breakage, revert this CL and stop the roller
using the controls here:
https://autoroll.skia.org/r/flutter-packages-flutter-autoroll
Please CC [email protected] on the revert to ensure that a
human
is aware of the problem.

To file a bug in Flutter:
https://github.com/flutter/flutter/issues/new/choose

To report a problem with the AutoRoller itself, please file a bug:
https://issues.skia.org/issues/new?component=1389291&template=1850622

Documentation for the AutoRoller is here:
https://skia.googlesource.com/buildbot/+doc/main/autoroll/README.md
CodixNinja pushed a commit to CodixNinja/flutter that referenced this pull request May 15, 2025
Labels
autosubmit Merge PR when tree becomes green via auto submit App p: video_player platform-ios platform-macos triage-ios Should be looked at in iOS triage