Participants

Todd Reifsteck, Ilya Grigorik, Markus Stange, Steve Souders, Charlie Vazac, Nic Jansma, Phil Walton, Nicolás Peña, Tim Dresser, Yoav Weiss

Scribe: Charlie Vazac

Chair: Yoav Weiss

Admin

Next call: Thu Jan 10th, 3p EST

Yoav: plan to experiment with auto-captioning the calls.

First step - record the calls! Are people ok with that?

Todd: we probably need to ask about recording the meetings on each call, for new members

Ilya: value in recording whether transcription is good or not

Resource Timing

should redirected navigations have a workerStart value if a previous redirect URL was intercepted

Yoav: workerStart only refers to the last intercept, or it's underdefined as to which worker workerStart should refer to if there are multiple workers involved. Also, workerStart probably needs to be gated by TAO.

Example: intercept the first navigation from one service worker that forwards to another origin which has its own sw.

Ilya: first sw triggers 301?

Yoav: yes, that’s what Ben is saying

Ilya: I'll have to look, but this should be covered, because we should restart processing on a 301

Nicolás: agrees that it should/does reset

Yoav: Ben is saying that the tests don't assert the right thing; they're not in sync with the spec. There are two issues: workerStart gated by TAO (a no-brainer) and validating that the tests are enforcing the spec.

Yoav: do we agree that we should enforce the current definition of "last redirect"? Had thought it would be meaningful to accumulate workerStarts, but that's probably outside of the current model.

Todd: redirects are handled by the fetch spec; we hand-wave that we record the "last one"...

Nicolás: redirects happen in step 20

Yoav: step 20 (20.1?) should also set workerStart to 0

Yoav: any volunteers to fix this one?

crickets

AI - Nicolás to file a Chrome bug to fix the test that is incorrect

spec what requestStart, responseStart, and responseEnd should represent when service worker is involved

Yoav: there are many issues where the current spec is underspecified; many of these timers require fetch integration and were postponed to L3. Perhaps this one should be moved to L3 as well? Objections?

Nic: let’s punt to L3

Ilya: for L2, we could specify requestStart and responseEnd (when the renderer receives the last byte). For L2, everything the sw does is opaque to us, and we can specify it as such

Todd: we scoped out what the sw does. The fetchStart/workerStart nuances (for redirects) need careful review.

AI - Yoav to untangle this one, defer some parts to L3, maybe create some new L2 blockers.

Ilya: for L2, treat the sw as a black box except for workerStart, with no visibility into what's happening inside
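A rough sketch of the black-box model described above, for the record. The entry object here is a hand-written stand-in for illustration, not a real PerformanceResourceTiming, and the TAO-gating behavior is the proposal under discussion, not settled spec text:

```javascript
// Sketch of the L2 "black box" model: workerStart is the only
// service-worker-specific timestamp; whatever the SW does internally is
// folded into the ordinary fetch timestamps.
function summarize(entry) {
  return {
    // Time between the SW intercept and the start of the fetch;
    // null when workerStart is 0 (no SW ran, or, per the discussion,
    // no Timing-Allow-Origin permission).
    swStartup: entry.workerStart > 0 ? entry.fetchStart - entry.workerStart : null,
    // Opaque to us: this window may be served entirely from inside the SW.
    request: entry.responseStart - entry.requestStart,
    download: entry.responseEnd - entry.responseStart,
  };
}

// Example with made-up millisecond timestamps:
const s = summarize({
  workerStart: 5, fetchStart: 10,
  requestStart: 12, responseStart: 20, responseEnd: 25,
});
console.log(s); // { swStartup: 5, request: 8, download: 5 }
```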

Many of the tests are flaky

Yoav: this is a recent issue; the WPT infra team is starting to look at test flakiness, including one of yours, Nic.

Nic: I will take a look at the initiatorType test flakiness

Yoav: I will look at the re-parenting one

Yoav: another flaky test in UT

Nicolás: I will take a look at the flaky UT test

Yoav: here are all the flaky tests 

Add a note regarding negative timestamps for navigation preload requests

Yoav: this is in a PR, can someone PTAL? This will close an outstanding issue. We are down to 9 actual issues; 2 are _almost_ closed.

Yoav: Issue 87 is about adding tests for first-byte measurements. Tests were added, but the merge was blocked by flakiness; Andrew is taking a look at that. I have many buffer-full-* tests that are also blocked on flakiness. In the next few weeks, we can close all of these L2 issues. There are a few around networking, but they are tough to test. We can change the spec easily; testing is hard, so the best path forward is to file issues on the WPT infra folks (120, 160, 123?). Can you test multiple sites with the same cert? The plan is to fix the spec and file issues for tests that aren't possible. 123 might be testable.

User Timing

mark-measure-return-null.html

Nicolás: We added a test to ensure that mark()/measure() do not return anything, because in L3 they do/will return the mark/measure entry. At present, Chrome is returning null, not undefined, so the test is wrong: it will pass in Chrome but fail elsewhere (even though those browsers match the spec). I propose we drop the test in advance of L3

Tim: should we fix Chrome and then fix the tests?

Yoav: this is testing behavior that changes from L2 to L3?

Nicolás: that's right, this won't make sense for L3. IMO the idlharness could/should handle the return value being undefined.

Todd: the test should test L2 or L3; it shouldn't be deleted or changed.

Nicolás: the test is not correct at the moment; does it make sense to keep it now, in light of L3 coming?

Yoav: we should rename the test to make it clear that it’s for L2

Ilya: first patch for L3 will have to delete that test

Nicolás: this shouldn't matter much; I don't think we should block L3 for this test

Ilya: the way the test was written, it’s for L2 support feature detection

Nicolás: that shouldn’t be necessary, that’s what idlharness should do

Ilya: from a developer's perspective, how do I check L2 versus L3? Are there other ways?

Philip: there needs to be a proper way to detect it

Ilya: two things: rework the test for better feature detection (and also add how to do this in the spec), AND we need to update the current spec so it passes…

[this is about feature detection for L2 v L3]

Philip: maybe we can check the prototype of the entry and see what's exposed on there. If performance.mark returns an object, and it has a prototype, you can check to see if "detail" is there.

Yoav: can't we rewrite this test to feature detect and then test L2 OR L3 (based on what the feature detection tells you)?
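One possible shape for the detection discussed above. This is a sketch of the idea, not settled spec text; it assumes L3's mark() returns a PerformanceMark entry exposing a `detail` member, while L2's returns undefined (or null in some current builds):

```javascript
// Hedged sketch of L2-vs-L3 feature detection for User Timing, along
// the lines Philip suggests: probe the return value of performance.mark()
// and check whether the L3 `detail` member is exposed on it.
function userTimingLevel() {
  const entry = performance.mark('__ut_level_probe');
  performance.clearMarks('__ut_level_probe');
  if (entry && typeof entry === 'object' && 'detail' in entry) {
    return 3; // mark() returned an entry carrying `detail`: L3 behavior
  }
  return 2; // undefined (or null): L2 behavior
}
```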

Tim: we have clear action item

What’s blocking shipping? (WPT results)

Yoav: so what's blocking shipping of UT from L2 to Rec? 3 issues. There are 3 tests that are failing in Chrome. The first one is fixed; the others are on their way to being fixed, right, Nicolás?

Nicolás: not sure, those are recent tests….

Yoav: possibly a time gap between tests landing and implementations landing

Nicolás: tests arrive immediately, but the dev build doesn't have the impl bits

Yoav: who's the next closest impl? Probably Firefox. It's failing the mark/measure test, but that's a test issue. It's failing another, but that's a recent addition.

Markus: AI to open bugs on Firefox re supported types.

Yoav: and we need to fix the test to feature detect and test L2 or test L3, whatever’s there.

Yoav: once impls land, and …, we should be able to ship to Rec. Other blockers?

Todd: supported types is not supported in any other browser, right? That's part of the Performance Timeline spec.

Yoav: each spec is calling into perf timeline to register

Todd: I think these are L3 tests

Nicolás: I'm unclear on how branching works; do we get to pick a commit to split L2 vs. L3?

Yoav/Tim: yes.

Todd: I suggest we move on with what we agreed about for L2 (which excludes supported types)

Tim: what do we need to do wrt spec for shipping L2

Nicolás: create a branch called "L2", change … to "CR"

Todd: but we published L2 in July; it's already in CR

Nicolás: so we need to point the branch to *that* commit

Yoav: not trivial to find branch point, but we can probably find it.

Nicolás: let’s use the last commit from … particular day

Todd: let's also take minor editorial changes (until 10/17) and use that as our L2. And now we're on to L3!

Yoav: if testing Canary, we are there except for mark/measure returning null/undefined

Navigation Timing

WPT results

Yoav: it looks like Firefox is second greenest, but there are still a few failures

Markus - AI to triage and file issues against these failures

Yoav: we aren’t blocked on those tests, but let’s file bugs and get to green

Todd: half of the failures are passing in nightly

Markus: should that be reflected in experimental?

Everyone: it should, but :shrug:

Markus: Regarding the supported entry type tests, should they only be for L3?

Yoav: we still need to fix them per the Performance Timeline L2 spec (and L2 of NT). It's L3 in UT because L2 of UT shipped beforehand.
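For reference, a minimal sketch of the supportedEntryTypes detection these tests exercise. The defensive checks are an assumption for illustration, since the static may be absent in implementations that haven't shipped it yet:

```javascript
// Sketch: check PerformanceObserver.supportedEntryTypes before observing
// a given entry type, falling back gracefully where the static isn't
// implemented.
function canObserve(type) {
  return typeof PerformanceObserver !== 'undefined' &&
    Array.isArray(PerformanceObserver.supportedEntryTypes) &&
    PerformanceObserver.supportedEntryTypes.includes(type);
}

console.log(canObserve('mark'));  // true where User Timing is exposed
console.log(canObserve('bogus')); // false
```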