Meeting minutes
Recognized Entities Call Introduction
Manu_Sporny: All right. Hey everyone, welcome to the recognized entities call. We'll get started here. Just a reminder that this call is recorded and transcribed, with the transcription posted to the web for everyone to read. If you have any concerns about that, or object to that, please let us know. (pause) All right, we do have an agenda today, and we're going to go over general introductions and any news announcements that folks want to make.
Manu_Sporny: And then after that we will take a look at our freshly published first public working draft. Today was the day, so we'll take a quick pause to celebrate that, and then move into pull request and issue processing, with a focus, I think, Shigeya, on the digestSRI thing. I know you provided some feedback, so today would be a good day to discuss that. And then, for the rest of the time: unfortunately, we don't have Steve Capell here, and he apologized for not being able to make it, and his use cases item isn't in yet.
Manu_Sporny: But since you, Phil, weren't here last week, it would be good to hear from you about alignment with the digital identity anchor stuff, the GS1 stuff, and potentially how this specification works with the multi-layered GS1 approach. I thought today would be a good day to go through that, because we want to make sure that the work we're doing here is compatible with what GS1 is doing, and compatible with the stuff that UNTP and Steve's groups are doing. So we're just information gathering and making sure that we know what it would take for alignment or…
Phil_Archer: Okay. Just wait.
Manu_Sporny: if we already have an alignment. So, that's the proposed agenda for today. are there any updates or changes to the agenda or anything else anyone would like to discuss today? Okay, let's jump into the first item then. any news, updates, introductions, reintroductions, or anything else folks would like to share in general with the group?
Manu_Sporny: Please, go ahead.
Ted_Thibodeau_Jr: This is very lightweight just so people are aware I will be offline for the next two sessions of this group.
Ted_Thibodeau_Jr: So anything that's dependent on me is, unfortunately, going to get pushed back.
Manu_Sporny: All right, thanks for the heads up, Ted. Go ahead, Phil.
Phil_Archer: Just very briefly, Manu, this doesn't apply to anybody on this call, pretty much by definition, but just to let you know: Brent and I are very well aware that a lot of people have joined the group as invited experts, and we're having a look at them and saying, do you really plan to do some stuff? And finding polite ways to say: you said you want to be an invited expert, now's a good time to actually do something. So that process is ongoing. Again, for emphasis, that doesn't apply to a single person on this call, but just to let you know that there are a lot of people in the group who we're not quite sure why they're there.
Manu_Sporny: Plus one to that. As Dave said in chat, experts gotta expert. Plus one. Thank you very much to both you and Brent for doing that. It is somewhat of a thankless job, so thank you for doing it; it's important to keep people to their word when they volunteer to be an invited expert. Any other items before we jump in? All right. First item: we have a specification published as a first public working draft. Today was the day it was supposed to be published, and it was published today.
Specification First Public Working Draft Published
Manu_Sporny: Thank you very much to the W3C team for getting that done. It uses the new name; hopefully we don't ever have to rename it. And it has all the stuff that you would expect to be in it. So, there's that item. Any questions or concerns about any of that? I wouldn't expect so. Our next step is to try to get things worked up in the specification to the point that we can call for a broad horizontal review. In order to do that, we have to create a threat model and document it.
Manu_Sporny: And then we will have to reach out to the horizontal review groups, including internationalization, accessibility, the TAG, security, and privacy, and I might be forgetting one in there, but there's a process to this, and we need to get to that point ideally as quickly as we can. The biggest work item that we have to do in there is the threat model, which means that some of the upcoming calls are going to be dedicated to threat modeling work. Largely, we're going to try and crowdsource the threats that people feel are most important when it comes to the recognized entities specification. How can people use this technology to potentially mount attacks against issuers or…
Manu_Sporny: verifiers? Phil, please go ahead.
Phil_Archer: We haven't set the agenda for the face-to-face meeting yet,…
Phil_Archer: but already threat modeling is a big chunk of that meeting. So I think it's really good that you're bringing it up now; I know it has been discussed before. The face-to-face is only, what, three weeks away, something like that, and we do plan to spend quite a chunk of time on that. I'm also aware that several people on this call will be joining remotely from the US. If that is a particular topic, therefore, that we should discuss, we also have Shigeya. So, if you can find the magic time when we should be discussing it in Brussels, so that Shigeya and Dmitri and Ted can all join the call remotely,
Phil_Archer: Please let me know what time that is and we'll put it on the agenda for that slot.
Shigeya_S: For the face-to-face, I will attend in person.
Phil_Archer: Of course, you're going to be there in person, aren't you? Great. Yes, of course you're going. Thank you. I guess my question to you then, Manu, is: do we need to schedule the discussion of threat modeling at a US-acceptable time?
Manu_Sporny: That's a good question. I don't know, what do people prefer here? Can we get some initial pass at a threat model done before, wow, that's only two weeks away, isn't it, before our face-to-face meeting? And then we've got, I think, one, two, three, four people, probably maybe five. Yeah, I think we need a US-friendly time, Phil, if we're going to talk about the threat model.
Phil_Archer: Okay. Hi,…
Phil_Archer: Brent. …
Phil_Archer: afternoon time for that. All right.
Ted_Thibodeau_Jr: I'm sorry.
Ted_Thibodeau_Jr: Refresh me on the dates. Okay.
Phil_Archer: It's the 4th of June. So the Thursday, the 4th, that's going to be European morning. So that won't be relevant to you, Ted, I guess, unless you like to get up in the middle of the night. So it's either going to be the European afternoon, your morning, of Tuesday, which I guess, unless Brent contradicts me, is most likely, because on the Wednesday afternoon we're going to finish early to go on the social activity.
Phil_Archer: So I think Brent that means that unless you say otherwise we're going to schedule threat modeling for the Tuesday afternoon Europe time that morning US.
Ted_Thibodeau_Jr: Europe is in so many time zones. What are we looking at?
Phil_Archer: The European…
Ted_Thibodeau_Jr: Okay,…
Phil_Archer: What we call the afternoon, which will roughly translate to your morning. Of course, there are multiple time zones. I mean, basically 1 or 2 p.m. Central European Time; deduct 6, that's Eastern Time.
Ted_Thibodeau_Jr: I'm likely to be able to handle those three days. Thanks,
Phil_Archer: We hope you can join us for at least some of the time, Ted. Yeah.
Manu_Sporny: All right.
Manu_Sporny: So I think hopefully that gave you the answer you needed.
Phil_Archer: Thank you. Yeah.
Pull Request Processing
Manu_Sporny: All right, then let's go ahead and move on to pull requests. We do have the digestSRI thing that we will talk about as an issue. So, real quick: Ivan's got two things that he raised around link checker fixes and maybe some spacing issues. This, I think, is an editorial fix; it's just a broken link. And then the workflow script for Echidna to auto-publish is editorial as well. Now that the FPWD is out, we can merge this, and then for every new update we make to the spec, we'll use Echidna to auto-publish the specification.
Manu_Sporny: So Ivan's just being wonderful as always and making sure that all the machinery is set up so that we can start auto-publishing this stuff. Those are the only open PRs this week. Any other questions or concerns about that? If not, let's get into this digestSRI topic. And for that, let me go ahead and topic this item.
Digest SRI Discussion
Manu_Sporny: So, you raised an issue that the spec isn't saying anything about digestSRI, and you raised a PR to address that by adding a digestSRI example to the specification; not just an example, but an addition to the data model. I pushed back a bit on whether we really need this, and then raised a tracking issue on the core data model. And in the core data model, let's see, we stated some very light support to do this, to remove it, or sorry, to deprecate it. Just to be clear, we're not removing it, we're deprecating it. That would be the furthest we would go
Manu_Sporny: in this iteration of the VC data model 2.1 spec, for reasons. Brent reached out to Orie and Mike to see if they have any concerns here; we haven't had a response back from them yet. And then, Shigeya, you noted that you've got some concerns. So let's start out with that. Shigeya, would you mind sharing those concerns with the group?
Shigeya_S: So, I haven't read all of these comments yet, but my first question is whether we want to discuss this here: this is a VC data model thing, and this is the VC recognition call. Would you prefer to discuss this in a VC data model context? I can provide my context here, of course, since it's related to the VC recognition call. So, firstly, I'll provide the context; is that okay for you, Manu? Okay. So, as you might know, I belong to Keio University…
Manu_Sporny: Yes, please. Yep.
Shigeya_S: but also I'm part of the Originator Profile CIP, and since
Shigeya_S: W3C membership only allows me to attend on behalf of one organization, I picked Keio as my organization for the membership. So, at this moment, I'm mostly wearing Keio's hat, but from time to time I might need to wear the Originator Profile hat; for technical reasons, I'm attending this working group under Keio's membership. That said, Originator Profile has a similar concept to what's being discussed in the VC recognition work, and the first thing I wanted to hear is whether a use case overlap exists between VC recognition and Originator Profile.
Shigeya_S: But unfortunately, we immediately dove into the technical side, I mean, the specs. So, my context here is that Originator Profile started developing code at least around 2024, and at the time, they recognized that we are working on protecting part of the HTML representation using the VC data model, or whatever credential format, I mean, a signed object. That is what we want to do.
Shigeya_S: So, for example, one of the ways is using a CSS selector to select the part of the HTML DOM to sign, from the originator's point of view. That is one of the use cases we are working on, and naturally, the target of the signing extends to any HTML content or resource.
Shigeya_S: So, back in 2024, we thought it was a good idea to follow the W3C specification to implement that. That is the reason why we were using SRI at that moment: it is a REC, and it was available, so it was natural for us to rely on it. Later, we found out that the VC DM supports digestSRI, so it was natural for us to use that spec as part of the code.
Shigeya_S: So, that is the context, and currently we have multiple pieces of code already relying on that property. But, fortunately or unfortunately, we are still in the development phase, and that part of the code can still be modified, we think. So we are considering whether we can modify our code to adjust to the multibase-style model or not.
Shigeya_S: The developers' answer is that it seems to be possible, but whether we want to go that way or not, besides our cost to work on this, there is the question of the gaps between the W3C web specs and the W3C VCDM, since our code lives between the web context and the VC context. So I think this discussion should happen in the VC data model working group, and the pull request itself, I think, depends on the upstream.
Shigeya_S: So, if the upstream still allows us to use digestSRI, then I think it is natural for VC recognition to have that property, for now at least, because I think there is overlap between our work and the VC recognition work. So I want to align with this work if possible, but I do not know whether there actually will be overlap or not; it's up to the situation. Anyway, I understand Manu's concern from the VC data model ecosystem's point of view.
Shigeya_S: It is natural to eliminate that kind of multiple choice. But since we are working in the W3C, and the OP CIP is referring to the web spec, which is a REC of the W3C, I honestly have a question about how we want to align these two ecosystems. I assumed that the W3C is a single ecosystem, but there seems to be a little divergence here. So I want to hear opinions with regard to that. That's what I want to say at this moment.
Manu_Sporny: Thank you, Shigeya, for that explanation of the background and the concerns that you have across multiple projects. Dmitri, you're next on the queue.
Dmitri_Zagidulin: Can you say a few words on why we're considering deprecating digestSRI? It seems pretty core to the verifiable credential data model and security.
Manu_Sporny: It's deprecating one of two ways of doing it. Currently, the core data model spec says you can use digestMultibase or you can use digestSRI, and if I remember correctly, we landed there because we couldn't come to consensus on one or the other. What has happened since then is that we are now having the discussion all over again in a number of the leaf specifications, the extension specifications.
Manu_Sporny: So, bitstring status list, and the VC recognized entities spec, and so we are repeating that discussion, and it's eating up call time across a variety of these different things. And when we look at the technical implementation delta between the two, for converting what we have in digestSRI and implementing it as digestMultibase, we're talking about two to three lines of code, right? So that is the ask of developers: if you want to convert over to digestMultibase, it's literally two to three lines of code. I reconfirmed that this past weekend. So it is an attempt to try and unify it down to one thing, right? I mean, for a variety of reasons, we were not able to do that in the core data model.
Manu_Sporny: But now we're kind of paying for it in every single spec, because we have to have the debate all over again. Okay, so that's…
Dmitri_Zagidulin: Got it. Understood.
Manu_Sporny: why we're talking about it now: because it came up again. Now we're like, okay, do we support digestMultibase or digestSRI or both in the recognized entities spec? And we've eaten up a couple of hours of telecon time discussing it. Hopefully that answers your question, Dmitri.
Manu_Sporny: I'm like why Okay.
Dmitri_Zagidulin: It does.
Dmitri_Zagidulin: Yes. Thank you.
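The "two to three lines of code" conversion discussed above can be sketched roughly as follows. This is an illustrative sketch, not code from either spec: the multihash header bytes and the `u` base64url multibase prefix are assumptions based on the multiformats registries, and the function name is made up.

```python
import base64

# Multihash header bytes (multicodec code + digest length) for the SHA-2 family.
# Values taken from the multiformats registry; treat them as illustrative.
MULTIHASH_HEADER = {"sha256": b"\x12\x20", "sha384": b"\x20\x30", "sha512": b"\x13\x40"}

def sri_to_multibase(sri: str) -> str:
    """Convert an SRI-style digest ("sha256-<base64>") to a multibase(base64url) multihash."""
    alg, b64 = sri.split("-", 1)
    digest = base64.b64decode(b64 + "=" * (-len(b64) % 4))  # tolerate missing padding
    multihash = MULTIHASH_HEADER[alg] + digest
    # "u" is the multibase prefix for base64url, no padding.
    return "u" + base64.urlsafe_b64encode(multihash).decode().rstrip("=")
```

The reverse direction would strip the multibase prefix, decode the multihash, and re-emit the `sha256-`-style header, which is why the migration cost being discussed is so low.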
Manu_Sporny: And then, I agree, this is a decision probably to be made upstream in the VC data model specification. But because you're here today and you had feedback, we really wanted to understand that feedback and get it down on the record, and to note some of the reasoning behind removing it. This will hopefully answer some of the questions that you asked.
Manu_Sporny: So, SRI is a web specification; it is defined by the W3C, and it is a recommendation. digestMultibase is also published by the W3C and defined in a recommendation. So they're both W3C recommendations. Does W3C publish multiple ways to do things? Yes. The most extreme example was XHTML versus HTML5; for a decade there was stuff happening there. Same thing around the DC API: mdoc support, SD-JWT support, W3C VC support.
Manu_Sporny: So W3C is no stranger to publishing multiple ways to effectively accomplish the same task, and I think that's normal, for better or worse. I think, as you said, the new SRI work is adding integrity policy headers and other things like that. That's another example of digestSRI now doing things that we didn't necessarily intend to be a part of what we're doing with the digest stuff in the VC data model.
Manu_Sporny: So, the purpose of the digest mechanism in the VC data model is to create some kind of cryptographic hash over a remote resource, so you can digitally sign over that remote resource, so you can then depend on that remote resource when you're processing a verifiable credential. And digestSRI, digestMultibase, and really any other digest mechanism would accomplish that. The challenge right now is that we have multiple ways to do the same thing, and in terms of the technical differences, migrating from digestSRI to digestMultibase is the easier direction.
Manu_Sporny: And the reason for that is that digestMultibase has an extensibility mechanism. You can support more than just the core NIST SHA-2 hashes with digestMultibase, whereas digestSRI is very much focused on NIST SHA-256, SHA-384, and SHA-512. I think the newer versions are considering SHA-3, but no Poseidon, no other type of hash mechanism, no variable-length hashes, none of that stuff,
Manu_Sporny: whereas digestMultibase does support them. So it has an extensibility mechanism and broader support for different types of hashes, and that's one of the reasons to suggest digestMultibase over digestSRI. I would imagine, Shigeya, that on the Originator Profile stuff, from a development standpoint, it would not be a very heavy lift for your developers. Again, it's three lines of code max if we're talking JavaScript-level, Python-level programming, and therefore it should be a fairly easy switchover.
Manu_Sporny: So, let me stop there, if there are any other thoughts or concerns or other things we should be considering in this discussion. I should also mention one last thing: we're not removing it from the specification. We are deprecating it and telling people we are probably going to remove it in the next version; we're not removing it right now. So even when we publish VC data model version 2.1, it will continue to support digestSRI. It's just that the next version, 2.2 or 3.0, may remove it. And even if we remove it then, we're not going to remove it from the vocabulary. We're going to say it's deprecated, but people can pull it into their own context; Originator Profile could pull it into their JSON-LD context and then continue to use it through there, beyond version 2.2 or 3.0. Let me stop there. Go ahead.
Shigeya_S: Yeah, two things. I honestly prefer multibase in a VC data model context, of course, as we discussed. And actually, I have checked the intention behind the use of digestSRI with our developers, and of course, they are heavily on the web development side. So even between my developers and me, there is something of a different mindset. So how people will choose, given the current VC data model 2.0, we do not know, right? So I think
Shigeya_S: the signed objects are already out there, for example on sites, so it is partially deployed. So for us, it is a breaking change, and I think we can't ignore that situation.
Shigeya_S: My question is whether it is a good idea to proceed in this context, and that is from the use case, or developers', point of view. The other thing is that I understand that SRI is not updating its specs with regard to crypto agility; I understand and I know that. So I understand that multibase has the better spec at this moment.
Shigeya_S: But both of these specs can be updated, and within the context of hashing, I think they are the same from the extensibility point of view: each could include future hashes. I mean, SRI could include a future version of the hashes. So the extensibility argument is not convincing for me.
Shigeya_S: So, I understand what was said, but I prefer multibase in multiple contexts. If we were discussing this in the IETF, for example, or other SDOs, that would be okay, but we are discussing this within the W3C. That is what I want to say. That's it from me.
Manu_Sporny: All right, thank you, Shigeya. Any other thoughts on this issue? All right, go ahead, Dmitri.
Dmitri_Zagidulin: I wanted to say that the argument, which I agree with, that it's only a couple of lines of code to convert between the two, goes in the other direction too. If we end up supporting both, it's trivial to write functions that support both, right? Because the conversion is so easy. That's it.
Manu_Sporny: How do you support Poseidon in digestSRI?
Dmitri_Zagidulin: The use cases that it can handle will do digestSRI; the use cases that require Poseidon will do multibase…
Manu_Sporny: Right. Dave,…
Dmitri_Zagidulin: but my point is that, on the validation side, it should be able to support both, so there's not much need to deprecate.
Manu_Sporny: you had your hand up temporarily.
Manu_Sporny: I don't know if that was a
Dave Longley: Yeah,…
Dave Longley: I was also going to make the point that updating the SRI spec requires two interoperable browser implementations, which is a very different bar from two interoperable VC implementations. And so,
Manu_Sporny: Yeah, plus one to that. DigestSRI is never going to support Poseidon; they've already mentioned that the hash functions they're interested in supporting are just the NIST hashes, and that's it. I mean, the other way to go at this is to ask the developers: what can you absolutely not support? Meaning, are there use cases that you currently have that, if you had to switch to digestMultibase, you would not be able to do? And I think the answer to that, across the board, is that there is no use case that you can't support by switching over.
Manu_Sporny: I mean, developers have their things that they really like to use, but here's an example where we could easily go down to one, and we're instead kind of throwing our hands up and going: but they get to choose. It's like, no, sorry, just pick one. Everyone's use case will be supported with one of these, and the other one doesn't support all the use cases. And so I think we really need to ask the developers what they cannot do with digestMultibase, because right now it's largely a preference-signaling thing that's going on here, and that's not a good technical argument. We need technical arguments here: what is going to break in your ecosystem if we deprecate digestSRI? And the answer to that is nothing. If we deprecate it, nothing breaks.
Manu_Sporny: You can continue doing what you want, and if we remove it from the spec, if it's so important to you, you can take digestSRI and put it back into your JSON-LD context, and you can continue to use it within your ecosystem. So nothing breaks there either, right? So I'm going to start pushing back a little harder on this. It really sounds like this is developer preference pushing on something and making us have two options where there's just no reason to have two options; no technical reason, certainly. Please.
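The fallback Manu describes, pulling a deprecated term into your own JSON-LD context, would look roughly like the fragment below. This is a hedged illustration: the IRI shown follows the VC vocabulary namespace pattern and should be treated as an assumption, not an authoritative term definition.

```json
{
  "@context": {
    "digestSRI": {
      "@id": "https://www.w3.org/2018/credentials#digestSRI"
    }
  }
}
```

An ecosystem that includes such a context alongside the core VC context could keep issuing and consuming the property even after it leaves the core specification.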
Shigeya_S: I do not want to push too hard, but I think we are observing two types of developers: your developers think multibase is preferable, and my developers actually prefer SRI at this moment, though they have a willingness to change, of course. But how can we measure which is more popular? Of course, multibase at this moment is more popular, I think. But, for example, if we want to deprecate this, what is the criterion by which we can deprecate it? That is my question.
Manu_Sporny: I mean, the criterion we're using right now is asking who has implemented it and is currently using it in production, and we're not getting anyone saying that they're using it in production. You've said you've implemented it. We know that Trade Verified, I believe, implemented it, but I don't know if they're interested. And for the people that have implemented it: are you really stuck on it? Can you absolutely not move over to digestMultibase if we give you two years of heads-up to move? Go ahead, Dmitri.
Dmitri_Zagidulin: By that same logic, again, given that the conversion is trivial, I think we've spent more phone time on this than supporting both will take for all of the developers combined. Let's just leave it. I mean, what do we gain by forcing it down to just one option, aside from very minor conceptual elegance?
Dave Longley: If you want real interoperability, we're going to have to say it's a must-implement, both of them, everywhere.
Manu_Sporny: Yeah, plus one to that. Dmitri, also, I don't know if you've seen all the bugs that have been raised on digestSRI mis-encoding, and the amount of work that it's caused those of us that are actually implementing these libraries and having to talk to developers about what the difference is with digestSRI: you need to put the sha256- header on there, and no, it doesn't compress down into binary. I mean, we get direct emails from these people asking about bugs and how to implement this stuff.
Manu_Sporny: It's definitely eating a lot of my time having to support it, and it's kind of like, I've got better things to do with my time. As an example: you might not be exposed to this, but the rest of us that are building the tooling and…
Dmitri_Zagidulin: Got it.
Manu_Sporny: trying to do the interop work, and trying to talk with customers about what they should implement, SRI or multibase, and spending days on just that, and vendors getting their hackles up over it. It's a huge waste of time, right? But…
Dmitri_Zagidulin: Got it.
Manu_Sporny: but we were forced into those situations because the spec didn't make a choice. Go ahead, Shigeya. Yeah,…
Shigeya_S: I understand that. But it's already a REC, right?
Manu_Sporny: but XHTML is a REC too. There are tons of published RFCs and other specifications that are recommendations that people are just not using, or…
Manu_Sporny: have moved away from, or just don't implement because they don't meet their needs.
Shigeya_S: Yeah, I…
Manu_Sporny: Just because something exists out there as a standard or a REC doesn't mean that it's useful. Sorry, go ahead, Shigeya. Yeah.
Shigeya_S: what I want to say is please do not take this lightly.
Manu_Sporny: And just to be clear, I mean, I don't think we're taking it lightly. We published it; we're marking it with a deprecation notice for 18 months; we're getting feedback from you and from other folks. I'd like to hope that we're doing as much as we can to get feedback before we make a decision. So I hope you feel heard, but there are counterarguments against a number of the things that you're raising.
Manu_Sporny: But yeah, plus one; I hope we're not coming across as taking it lightly. Unless there's any more discussion on this, I think we can move on, and maybe the decision has to be made, one way or the other, in the core group, and then that'll potentially trickle down to all the other specifications. All right, going back to the list of issues: that was the digestSRI one, and we have a couple of other ones here. Apologies, not a lot of time left, Phil, but maybe we could talk about the use case around recognized entities
Manu_Sporny: and use in GS1. If you want to just kind of kick that conversation off, whatever we don't get to today, we can get to on the call next week.
Recognized Entities and GS1 Use Case
Phil_Archer: I can state it fairly clearly.
Phil_Archer: Kevin's on the call, and Kevin is very much a part of this conversation. I put it in for a couple of reasons.
Phil_Archer: It's very unfashionable, I know, but I refer you back to the use case document that was published way back. Kevin supplied the GS1 use case, which, for emphasis, yes, that's where I come from, but this applies to any identifier that works in the way that URLs work, that DOIs work, that other things work, where you are given
Phil_Archer: a block of numbers, a block of something or other, and then you add to it to create your individual identifier. I shouldn't use the word key, it's a GS1 term, but essentially. So, in the case that we have, GS1 global office issues a credential to, for example, GS1 Canada, and says: you can create numbers that begin 754. And then GS1 Canada issues credentials to its member companies that say: you can issue numbers beginning 754 1 2 3 4 5. And they do that, and then they issue a number that says 754 1 2 3 4 5 6 7 8 9 10. This is not an uncommon pattern.
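The delegation pattern described above, where each level may only issue identifiers that extend the prefix it was given, reduces to a simple check. A minimal sketch; the function name is made up for illustration, and the prefixes mirror the GS1 example from the discussion:

```python
def valid_prefix_chain(prefixes: list[str]) -> bool:
    """Check that each delegated prefix strictly extends the one above it.

    E.g. GS1 global office -> "754" (GS1 Canada) -> "75412345" (member company)
    -> "754123456789" (the item-level identifier).
    """
    return all(
        child.startswith(parent) and len(child) > len(parent)
        for parent, child in zip(prefixes, prefixes[1:])
    )
```

A verifier walking the corresponding credential chain would apply a check like this alongside signature verification at each hop.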
Phil_Archer: When we come to the recognized entity work, also bearing in mind the digital identity work at UNTP, for those who may not be familiar with it: the current thinking, as I understand it, in this task force's specification is that it doesn't have the concept of "I'm giving you the beginning of something that you can then extend."
Phil_Archer: Rather, it says: I am an accreditation service, like NATA in Australia, that says your conformance testing laboratory is hereby empowered to test for conformance for organic status or free status, whatever it may be. So there is a hierarchy there, and that hierarchy is expressed with a digital identity anchor as the ultimate root. But in the case of identifiers that work the way GS1 identifiers work, or anything else that works the way we work, yes, we're giving those credentials that go from global office to, for example, GS1 Canada. Yes, they kind of say: you can issue your own identifiers now, but within constraints, and the constraint is that you begin with this sequence of characters that we've given you.
Phil_Archer: Do we absolutely need that to be included as a use case in the recognized entity work? I'm not sure. We have that data model anyway; what's currently published is an advance on some work that Kevin did some years ago. But nevertheless, in order to do what the group is doing, which is: okay, you give me this credential, but who the hell are you to say that I can have this credential? We do kind of have this anchor, but it's a bit of a dodgy, not very helpful, reference that we can give.
Phil_Archer: So my question to the group, which, no, we're not going to discuss in detail, is: does anybody else have the appetite to think in terms of a recognized entity credential that says whatever you issue must have these features? In our case, it must begin with the sequence of characters that we've given you. It may well be that, in reality, to get that through the Rec track, GS1 is the only one, in which case that's not enough; we won't do it, and I don't want to hold everything up just for the sake of one organization. That would not be right, and not be fair. If not, I suppose I wonder whether this group might have a view on how you would say: this credential that I'm giving you has currency because I've got this other one over here that I'm basing it on.
Phil_Archer: and how we would do that in our case. Don't know if I've explained that properly. There we are.
Manu_Sporny: Yes, thanks, that was very useful. A number of us are cheating here, because we've seen the GS1 use case over many, many years, so we're very familiar with it. And yes, it's a very useful thing. So let me talk about the worst case and move to better cases from there. The worst case is that GS1 just says: this is the way we do it, and it's done that way. You're a very large organization, and you have the ability to suggest that this is how things should work in the GS1 ecosystem. So that's the worst case: you are still able to do this; no harm, no foul, that sort of thing.
Phil_Archer: Yeah. Yeah,…
Phil_Archer: we can. Yeah. Eat this.
Manu_Sporny: So, can we get better than that? I do think that there is a general pattern here across the VC ecosystem, where you have one credential, and you need another credential for that credential you're holding to mean anything, and then maybe that one has a requirement upstream, and so on and so forth. So this, and I'm just using this as the hierarchy here: high level, next level, leaf credential, is a very common pattern across many industries, X.509 DNS certificates being one example of that, right? Like TLDs and…
Phil_Archer: Yeah. Yeah.
Manu_Sporny: you've got your root registrar, and then you've got your DNS TLD providers, and then you've got your leaf certificates. Same exact setup. If you are a person trying to prove eligibility to work, you need an identification card of some kind, you need your work eligibility permit where the names and identifiers match up, and then maybe you have a specific leaf thing. So this happens all over the place, all the time. I think it's a very common pattern. And so the question in my mind is, does the recognized entities spec help in any way here? And the short answer, I think, is yes. But I'm going to put myself back on the queue so Kevin can add a few things.
Kevin_Dean: Yeah, one of the other examples I use for this is the idea of a diploma. We have the use case of a diploma; it's one of the earliest examples that we've had in VCs. But in order to recognize a diploma, you have to recognize the university that issues it. And that really gives you two options. One is you can have a list of every single university, the DID for every university, in some kind of trusted root file, so that you can refer to it and say any diploma issued by any of these DIDs is valid. Or you have a hierarchical set of VCs, one of which is issued by some governing body, maybe a federal department of education, which says
Kevin_Dean: we recognize these institutions as universities. And all you need is the DID for that trusted root, and any presentation should have either two VCs or a path to two VCs: one being the diploma itself and the other being the accreditation credential for the university that issued the diploma. And that accreditation credential has to come from the trusted root, namely the department of education that you trust. The model is similar for the GS1 system. There's a trusted root called GS1 Global Office. Using the GS1 model, you don't need to know each and every GS1 member organization around the world. You just need to know Global Office and have a chain of credentials from the one that's being presented to you leading all the way up to the root of GS1 Global Office.
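The chain-walking idea Kevin describes can be sketched in a few lines of code. This is a minimal illustration only: the credential shapes, field names (`issuer`, `credentialSubject`), and DIDs below are hypothetical stand-ins, not structures defined by the Recognized Entities spec, and real verification would also check signatures, expiry, and status.

```python
# Sketch of Kevin's point: the verifier holds only the trusted root DID and
# checks that each credential's issuer is vouched for by an accreditation
# credential leading back to that root. All names here are illustrative.

TRUSTED_ROOT = "did:example:dept-of-education"

def chain_to_root(leaf, credentials_by_subject):
    """Walk accreditation credentials from the leaf's issuer up to the root.

    `credentials_by_subject` maps a subject DID to the accreditation
    credential issued *about* that subject. Returns the chain of issuer
    DIDs if it terminates at TRUSTED_ROOT, else None.
    """
    current = leaf["issuer"]
    chain = [current]
    seen = {current}
    while current != TRUSTED_ROOT:
        accreditation = credentials_by_subject.get(current)
        if accreditation is None:
            return None          # nobody vouches for this issuer
        current = accreditation["issuer"]
        if current in seen:
            return None          # cycle guard
        seen.add(current)
        chain.append(current)
    return chain

# Example: a diploma issued by a university the department accredits.
diploma = {"type": "DiplomaCredential", "issuer": "did:example:utopia-university"}
accreditations = {
    "did:example:utopia-university": {
        "type": "AccreditationCredential",
        "issuer": TRUSTED_ROOT,
        "credentialSubject": {"id": "did:example:utopia-university"},
    },
}

print(chain_to_root(diploma, accreditations))
# → ['did:example:utopia-university', 'did:example:dept-of-education']
```

The same walk handles a deeper hierarchy (root, member organization, leaf) unchanged, which is why the verifier only ever needs to pin one root DID.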
Manu_Sporny: Plus one to all that. Dmitri, you're up.
Hierarchical Credential Patterns
Dmitri_Zagidulin: I wanted to say plus one. This is a commonly repeating pattern, from issuer registries and education to OpenID Federation. Obviously you mentioned X.509. So I think it might be useful for us to define a pattern like that, whether it's a spec to verify a credential chain or something like that. We can discuss, but I do think it'll be useful.
Dmitri_Zagidulin: And two, we've got two patterns here. One is the hierarchy one, and the other one is the namespace pattern, which is the identifier prefix that Phil was mentioning. And that's it.
Manu_Sporny: Let's go on to that. Phil, you're up.
Phillip Long: Yeah, I was just going to qualify that: at least in the US there is no federal accreditation of entities that are educational institutions.
Phillip Long: The only ones that would be there are the ones that are getting federal funding for something in particular. The highest level, in our context, is the accreditation body for a region, and then each of those sets has metadata around that credential, which underscores the information the individual can use to determine their willingness to trust, and the risk associated with trusting, that particular credential. So I'm agreeing with that vertical pattern, the way that you described it working, but I'm also emphasizing that at each level there's a set of metadata there that is useful.
Manu_Sporny: Yeah, plus one to that.
Kevin_Dean: Yeah, and I agree. And every one of these hierarchies will have a defined set of rules on who the root is, what they do,…
Kevin_Dean: what the scope of their work is. And you need to understand those rules in order to do the validation.
Phillip Long: Right, the quality assurance practices,…
Phillip Long: things like that.
Manu_Sporny: Plus one to that. So it sounds like we are in strong agreement that this is a very useful use case and a pattern that pops up in multiple market verticals. So the general question is: how could we generalize this? The recognized entity spec could do it. So Phil Archer, Kevin, let me know where this current proposal falls down.
Manu_Sporny: So if we were to use the recognized entities stuff to do this, then GS1 Global Office would issue a list of recognized entities for all of the regions,…
Kevin_Dean: What the f***?
Manu_Sporny: the different regions potentially. Would that be… I'm just talking about the first level,…
Phil_Archer: Mhm. …
Manu_Sporny: not the other levels, but that to me feels like the top list that would be published, and in it GS1 Global Office would say, these are all of the, I forget what you call them
Phil_Archer: Yeah, I get what you mean. Also, we are planning now to actually issue an individual credential to each and every one of what we call our member organizations. So GS1 Utopia is a member of GS1 Global Office. So that's where we actually issue, because one member organization can have more than one prefix.
Phil_Archer: GS1 US has loads, so it isn't a one-to-one mapping, but there is something that comes from Global Office that says, yep, you can do this. The important point of the pattern is, and…
Phil_Archer: as we've discussed, this happens in multiple cases: you can do this, but you must follow this pattern. That's the "how".
Manu_Sporny: Yes. Yes.
Manu_Sporny: And in the recognized entity specification, there is a schema for the credential that's issued, where you can embed that pattern into the schema. So in creating the recognized entity list, you could list: here are all the recognized entities, all the GS1 Utopias, and when GS1 Utopia issues a credential of this type, they must use this prefix, these things need to exist in the VC. So I think there is a way to do this with the recognized entity credential and…
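To make the "embed the prefix rule in the schema" idea concrete, here is one possible shape for such a constraint and a check against it. Everything below is a hypothetical illustration: the entry layout, property names (`entity`, `recognizedFor`, `credentialSchema`), and the prefix `950` are invented for this sketch and are not taken from the Recognized Entities spec or GS1's actual data model.

```python
# Sketch: a recognized-entity list entry carrying a JSON-Schema-style
# constraint that credentials from that entity must satisfy. All field
# names and values here are hypothetical.

recognized_entities = [
    {
        "entity": "did:example:gs1-utopia",
        "recognizedFor": "ProductCredential",
        "credentialSchema": {
            "properties": {
                "credentialSubject": {
                    "properties": {
                        # the licensed prefix, expressed as a pattern
                        "identifier": {"pattern": "^950"}
                    }
                }
            }
        },
    }
]

def prefix_for(entity_did, entity_list):
    """Pull the required identifier prefix out of an entity's schema entry."""
    for entry in entity_list:
        if entry["entity"] == entity_did:
            pattern = (entry["credentialSchema"]["properties"]
                       ["credentialSubject"]["properties"]
                       ["identifier"]["pattern"])
            return pattern.lstrip("^")
    return None

def conforms(credential, entity_list):
    """Does this credential honor the prefix its issuer was recognized for?"""
    prefix = prefix_for(credential["issuer"], entity_list)
    if prefix is None:
        return False
    return credential["credentialSubject"]["identifier"].startswith(prefix)

vc = {"issuer": "did:example:gs1-utopia",
      "credentialSubject": {"identifier": "9501234567890"}}
print(conforms(vc, recognized_entities))  # → True
```

In practice the schema would be evaluated by a full JSON Schema validator rather than hand-walked like this, which is also where Manu's caveat about business rules that don't fit in a schema would bite.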
Kevin_Dean: Okay.
Phil_Archer: Okay. All right.
Manu_Sporny: I think that holds for every level that we have here, but we would have to see if that actually holds all the way down to the leaf. I don't know…
Phil_Archer: I will add it to my list of to-dos.
Manu_Sporny: if that's quite true. My expectation is we do it together as a group to see…
Phil_Archer: Okay. Yeah, good.
Manu_Sporny: if this fits that pattern, and if it does, then that's kind of good news. It doesn't mean GS1 has to use it or shift to it or anything. You can keep doing what you're doing right now, but it's possible. The only thing I'm unsure of is if there are some fairly complicated business rules that would need to be executed that you cannot easily put into a JSON Schema.
Manu_Sporny: If that's the case, then it kind of falls apart. So if you can't
Phil_Archer: No, it should be pretty straightforward and…
Phil_Archer: I certainly would like it to be a generic case, because my ambition is that every scanner in the world will do this and will have the software on board to do this. Certainly the scanners that have a smartphone strapped to their back or whatever, the big ones that get used in the back of stores and at customs and whatever. I want them to be able to verify a complete credential if they read it in a barcode, or if it's a GS1 one, they can go off and resolve it, get back the VC, and do the whole verification thing all in one place. And we have the ability to get that into every scanner on the planet. That's the aim.
Manu_Sporny: Beautiful. I think we would all love to see that. With that, that's a great closing statement. Thank you, Phil. We're out of time for this week. We'll come back again next week, get some more thoughts from Steve Capel, and maybe, Phil, any further thoughts on the GS1 use case. And then we will try to move into threat modeling next week, give it a shot, see if we can crowdsource some threats from this group. All right, thank you everyone very much for the conversation today. Thank you very much, Shean, for the input on the digest SRI stuff.
Phil_Archer: Thanks, all. Night.
Manu_Sporny: and we will meet again next week and continue the discussion there. Thanks all. Bye. Meeting ended after 00:59:11.