Participants: Philippe Le Hegaret, Gilles Dubuc, Peter Hedenskog, Scott Haseley, Alex Timin, Will Hawkins, Nicolás Peña, Nic Jansma, Subatra Ashe, Benjamin De Kosnik
December 5 @ 11am PST
Scott: The scheduler.postTask() API allows scheduling of prioritized tasks
… Implemented a v0 task-queue-based prototype in Chromium
… Changing to a Signal-based prototype in response to TAG feedback
… Planning to go to Origin Trial to evaluate performance and broader feedback on ergonomics
… Controller/Signal based design
… Examples of old vs. new - the new Promise-based version is significantly simpler and less verbose
… What is "controller priority"?
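(For reference, a rough sketch of the Controller/Signal shape being described, using the names from the explainer at the time, i.e. scheduler.postTask() and TaskController; the exact constructor and option shapes were still in flux, and doWork is a placeholder:)
    // Post a prioritized task; postTask returns a promise for the callback's result.
    const controller = new TaskController({ priority: 'background' });
    const result = scheduler.postTask(doWork, {
      signal: controller.signal, // carries both priority and abort semantics
    });

    // "Controller priority" is the priority the controller holds; changing it
    // reprioritizes all pending tasks associated with its signal.
    controller.setPriority('user-visible');
    // Aborting works the same way as with AbortController.
    controller.abort();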
Yoav: So from a Priority Hints perspective, that would give us reprioritization?
Scott: Yes
Yoav: If I want to create a task that would propagate other signals to subtasks, this will enable that?
Scott: Yup. There are also cases where you’d want the signal and the priority to differ, so we want to allow passing an explicit priority along with the signal (sketched below)
… Signals are read-only - easy to pass around, and they help resolve the priority-inversion problem
… Task/SubTask vs. TaskQueue model - Signals are better for Task/SubTask, and can be used to model task queues, even if it’s clunky
… Exploring options around signal propagation (getting current signal, inheriting, etc). This has some risk of misuse/footgun
… Still seeking feedback from other implementers
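(A sketch of the two cases mentioned above: passing a parent's signal along to subtasks, and overriding the signal's priority with an explicit one. Names again follow the proposal; subtaskA, subtaskB and logMetrics are placeholders, and the propagation semantics were still being explored:)
    const controller = new TaskController({ priority: 'user-blocking' });

    // Subtasks reuse the parent's signal, so aborting or reprioritizing the
    // controller affects the whole tree.
    scheduler.postTask(() => {
      scheduler.postTask(subtaskA, { signal: controller.signal });
      scheduler.postTask(subtaskB, { signal: controller.signal });
    }, { signal: controller.signal });

    // An explicit priority overrides the signal's priority, while the task
    // still honors the signal for abort.
    scheduler.postTask(logMetrics, { signal: controller.signal, priority: 'background' });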
Will/Ben: Defer to Boris, who has been tracking it more closely
Ben: Interested in teasing out the difference between Abort and Signal propagation
Safari/Edge not present
Nicolás: Chrome is planning to ship buffering of LongTasks from the start of the page, available via PerformanceObserver
… PerformanceObserver.observe({ type: 'longtask', buffered: true }) (example below)
… Earlier we thought there would be overhead related to computing them, but then realized that they are being measured anyway
… Is the same true for Firefox?
… If so, and we’re willing to move forward with it, what would be the initial buffer size?
… No data on per-frame long task counts in Chrome, but there is data on time to onload
… The 90th percentile of time to the load event is less than 10 seconds, so at most fewer than 200 LTs (a long task is at least 50ms)
… Does a buffer size of 200 sound reasonable?
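(For reference, the registration being discussed would look roughly like this; the entry type is assumed to be 'longtask', and the 200-entry cap is the proposal above, not a shipped limit:)
    const po = new PerformanceObserver((list) => {
      // With buffered: true, the first callback also delivers long tasks that
      // happened before observe() was called, up to the agreed buffer size.
      console.log('long tasks so far:', list.getEntries().length);
    });
    po.observe({ type: 'longtask', buffered: true });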
Nic: 200 seems reasonable. Akamai can look at what the distribution is, but we don’t have the initial LTs (before the observer is registered).
Gilles: Any customers that buffer LTs by registering an observer in the <head>?
Nic: No major ones
Yoav: Maybe we could do some math based on the time at which the PO was registered and the number of LTs that happened after it.
Nic: No need to store the ones after PO
Yoav: True, but it can help us estimate
Gilles: Our p90 for the load event is 20-30 seconds. Need to evaluate the methodology
Nicolás: Working on gathering data but will take some time
Yoav: we could also change the buffer size afterwards, if it’s too small
Nicolás: Ideally, we would like to avoid it, but possible
Nic: 200 sounds like a decent number
Ben: calculation sounds simple and fine. Why is Gilles seeing 3x the numbers though?
Gilles: More users from slower countries? If we hit the limit often, we’ll let you know
Yoav: CrUX data?
Nicolás: Might not have onload
Nic: Akamai customers have a p95 long task count of ~30.
Gilles: So, I guess 200 sounds reasonable and we’ll let you know if we hit the limit
Ben: Sounds reasonable
Nicolás: Great, I’ll update the registry
https://github.com/whatwg/fetch/pull/955
Nicolás: Trying to integrate the TAO check into Fetch. There’s a PR for it, but there were no tests, and browser bugs needed to be filed for the changes
… There are 2 changes
… The goal is to align with CORS, but it will result in changes to the processing of TAO, so this is a heads-up on upcoming browser bugs
… Analytics providers can shout at us if the change would break TAO for them
Nic: We will notice
Nicolás: The main change is requiring a “*” for cross-origin redirects (illustrated below)
Nic: All I see is stars!
Nicolás: Great
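(A rough illustration of the redirect case, seen from the consuming side; the URL is a placeholder. Under the proposed change, a resource reached via a cross-origin redirect would need Timing-Allow-Origin: * on the responses, since a specific origin would no longer pass the check:)
    // Assume https://cdn.example/asset.js was reached via a cross-origin redirect.
    // If a response in the chain sent only a specific origin, e.g.
    // Timing-Allow-Origin: https://example.com, the TAO check would fail and
    // the restricted fields report 0.
    const [entry] = performance.getEntriesByName('https://cdn.example/asset.js');
    if (entry) {
      console.log(entry.redirectStart, entry.connectStart, entry.transferSize);
    }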
Yoav: The motivation is alignment with CORS and having CORS imply TAO. We’re not aware of security issues that require those restrictions, but aligning with CORS will give us more content.
Ben: High level goals sound great, need to dig in to better understand.
Yoav: Also, in CORS “*” simplifies the processing model, and CORS doesn’t support a list of origins. For TAO, we just want to align unless it breaks something significant.