Episode 05

with Brian Greene and John Hoyt of Nationwide

This week we talk with Brian Greene and John Hoyt, both members of the user experience team at Nationwide Insurance.

We discuss the challenges of working with such a big team, how they promote performance across departments, managing user expectations and improving perception of speed by designing feedback mechanisms, and more.

Direct Download

Transcript:

Tim Kadlec:

Welcome. You’re listening to episode five of the Path to Performance, a podcast for everyone dedicated to making the web faster. I’m Tim Kadlec.

Katie Kovalcin:

And I’m Katie Kovalcin, and we are actually sort of live—I guess not live by the time this airs—but we are live, in-person, together at Velocity Conference, which is exciting, because it’s an entire conference dedicated to performance.

Tim:

Yes, which has been fun. And yeah, we are in a room that we found that was unlocked, that is vacant, and apparently semi-quiet.

Katie:

[laughs] Yeah. We’ve spent the past two, three days now, listening to lots of really great talks and getting into the nitty-gritty of performance as of late.

Tim:

That’s true. Yeah, I mean that’s the thing with Velocity is there’s usually a few that are a little bit more general, but a lot of the talks get very intense and go kind of down the rabbit hole on a certain topic, so. It’s been fun.

Katie:

Yeah, yeah. It was pretty intimidating to give a presentation, as a designer, to a room full of engineers that are really focused on this stuff. But met a lot of really great people and it’s just really fun to be around all these people that really think about this day in and day out. So, it’s been good.

Tim:

You did awesome. You don’t have anything to worry about.

Katie:

Thanks.

Tim:

But yeah, that’s my favorite part about it, is just being sucked into continual performance conversations the entire time. Yeah, it’s been good, it’s been nice. It’s been a little cooler than I would have expected from California.

Katie:

Oh yeah, it’s like really cold here. I don’t know how you San Franciscans—

Tim:

Santa Clarans.

Katie:

—Yeah, Bay area folks, deal with this. This is not summer to me.

Tim:

I’ve been told that it’s usually like—I mean, it’s upper 70s. So, I think that honestly the Californians are being kind of wimpy. It’s warm-ish.

Katie:

“And that’s when the podcast turned to the weather…” [laughs]

Tim:

[laughs] “Yes, you’re listening to the Path to Weather…” I don’t know, that’d be a horrible name. Anyway, the conference has been fun, some good talks. I enjoyed the font one from Zach Leatherman, it was good. Steve Souders had a really nice design and performance one, which fit very nicely with what we do on the podcast. You should totally check out the video when that comes out. Or actually, I think he gave it at FluentConf. I think there’s a video out already. It’s a really good talk, and it is perfectly aligned with this sort of thing.

Katie:

Yeah, definitely worth checking out. Today’s really fun. We have a really different interview than we typically have had in that we hear from Nationwide, which is a much larger organization than we usually talk to, and also they’re just in the early stages of trying to figure out what that culture looks like. So, I think that they have a lot of really interesting stories to share about the more foundational aspects of really what it’s like to get started with thinking about performance.

Tim:

Yeah, I think it’s definitely a different perspective than we’ve had on before, partly because of, like you said, the scope, and partly because, yeah, they’re new to it, so they’re kind of fleshing it out as they go, which is really interesting to hear their perspective. I think we even probably are going to have to follow up with them a little bit after.

Katie:

Yeah, absolutely. So, let’s go talk to them!

Tim:

Sounds good.

And now we’re excited to have Brian Greene and John Hoyt from Nationwide Insurance join us on the podcast. Hey guys.

John Hoyt:

Hey, how are you?

Brian Greene:

Hello.

Tim:

Pretty good, can’t complain. Things are going well at Nationwide?

Brian:

Absolutely, absolutely.

Tim:

Can you give us a little bit of background on Nationwide, as well as what each of your roles are there?

Brian:

Yeah, sure. So, Nationwide is primarily known as an insurance company. But we do a lot more than that, including financial services and banking; Fortune 100 company located in Columbus, Ohio. Myself, my title is creative technologist on the user experience team, and our team is comprised of a little less than 70 people currently, but we’re growing and it’s an exciting time to be in user experience inside of a large organization like this.

John:

Yep, and I’m a web user experience designer and sometimes front-end developer.

Tim:

“Sometimes front-end developer”? [laughs] So, you sit on two different teams, or…?

John:

It’s the same team, it’s just role-specific depending on what needs to be done at the time, so.

Tim:

Nice. So, are you on separate teams or are you on the same team, the two of you?

John:

Yeah, we’re on the same team.

Tim:

And you said about 70 people?

John:

Yeah, so our team has kind of a holistic user experience approach: content strategists, researchers, creative technologists, front-end developers. Plus, we have visual designers and interactive designers, information architects.

Katie:

That’s a big team.

Tim:

Yes, it is.

Katie:

Very thorough.

Tim:

That’s actually one of the things that’s most interesting to me, just to get a little bit more of a perspective here: the scale at which Nationwide is operating. You have 70 people on this team, and I imagine you have numerous different teams and people involved in touching different components of the website, correct?

Brian:

Oh yeah, absolutely. Nationwide, as a whole, has over 30,000 people. Our IT organization might be north of 3,000, I believe, so a very large IT organization. And I think that when you realize the scale—I’ve been here almost five years and I’m still absorbing the scale of what Nationwide operates under.

So, you have Nationwide.com of course, which is just primarily our marketing side. But then I would say no less than another 40 or 50 different websites that we manage, minimum. So, if you imagine you’re an insurance agent selling our product, then there are dozens of applications that you might need for personal insurance, commercial insurance, all those types of life insurance. Each of those has separate applications that need to be maintained, and so that’s why our user experience team is so large: because we are maintaining dozens and dozens of different applications across the enterprise. So, some publicly visible and some not. I think probably about 80% of what we do may never even be seen by the public because it is just meant for internal, and insurance agents, and financial advisors.

Tim:

Yeah, that kind of scale—that’s staggering. You said 3,000 people. That’s about 2,999 more than the official Tim Kadlec, LLC.

John:

[laughs] You’ll get there one day.

Katie:

Yeah, that’s a really big team.

John:

Yeah, it’s huge. I joined the team a year and a half ago, and I think we had about 45 people. So, within a year and a half we’ve added another 25, and by the end of the year we’ll be upwards of 70 to 80.

Katie:

So, what’s the culture like? It sounds like you have some pretty thorough, specific roles that are doing really cool things. What are some of the things that you’re thinking about? And as this team is growing, how is everyone getting onboarded with all of those many, many sites and things?

John:

That is a great question. Honestly, the team has been around in this form for only a couple of years now. Think of us as an internal agency for Nationwide. So, a lot of our business partners just didn’t know about our department, or about user experience as a whole, really until the past few years. So, we’ve gone through quite a bit of a cultural change in terms of how we engage with them. Before, they might have engaged directly with IT about a project, but now they’re realizing that they need to engage with this user experience team. And then we’ll reach out and engage with IT throughout the process. I think we’re still figuring it out, to be honest. [laughs]

Brian:

Yeah, and I think as we continue to grow, is the model even sustainable? I don’t know how big we’ll grow, and I think that there’s going to be some limits there, but I think we’re in a great spot right now where we have more work than resources, we have a great leadership team managing all of that around who’s right for which projects, and which projects do we want to take on, and which ones do we have to turn down at this point because of lack of resources.

Tim:

Are you actively maintaining and promoting performance across that widespread diverse group of people?

Brian:

I think we’re starting. It seems to sort of crop up every now and then. I think when a negative experience happens, I think performance sort of pops up in the conversation, and I think I’m taking it on as sort of a personal effort to keep that conversation going throughout not only user experience but also with our IT organization.

So, while we have a very large user experience team, we don’t write the production code. Our IT partners will be the ones actually in the code, writing it. So, they’re ultimately responsible for the front-end performance, and so I think that we are still trying to figure out culturally where user experience plays a role in front-end performance, knowing that we don’t have direct influence over that, that it’s ultimately our IT partners that have to take that on and make it an important priority. I think that we’re still getting there.

Katie:

So, this is actually interesting. We talked to Etsy, which is a large organization. But I think Nationwide, it sounds like you don’t have your teams as closely sitting next to each other and able to talk about this stuff, so that’s an interesting problem to pose, of having the people that are developing it not being right next to you. So, what are the challenges of that? Do you get to talk to them regularly? Or is it kind of like, “We did this thing, and here you go”?

Brian:

I think it’s a lot of conversation. I think one of the biggest challenges is that there are so many agile lines working on different applications, and they may not necessarily talk to one another that often. So, we might have a success—we might have a particular team, let’s just say Nationwide.com, for example—the team that’s assigned to that, we really dive deep with them, we get them caring about front-end performance, they’re really hitting the mark. But as I said, at the scale that we are, that’s one team and one asset, and there’s another 100 assets and another 100 teams that we have to convince as well.

So, I think we’re sort of taking the guerrilla approach, sort of starting small, starting with one team and getting those small wins, and then that can sort of serve as a case study. One thing that I’ve noticed really works well here at Nationwide is being able to have a good story to tell of success with one particular team; other IT groups seem to latch onto that and be able to see the value of it. So, they give a lot of credit when you can prove success in another area of the organization.

Tim:

I’m really interested in hearing that story, like what the story was that you may have been able to tell out of getting that team invested. But first, I’m actually curious: How did you get that team on board?

Brian:

I think each team is going to be a little bit different, depending on just how those teams are structured. Sometimes it’s better to go through official channels, and we’re going to make front-end performance a requirement and we’re going to really measure against these things, and we’re going to test against these things. But that kind of comes with a lot of extra overhead and cost, to be honest with you. If I went to our IT partners and said we need to start testing for front-end performance, and that’s something you’ve never done before, well, that’s going to be an additional cost and now it’s a harder conversation to have about trying to get that part of the process.

So, the other way is to just sort of build it into the process, sort of like what we’ve tried to do with accessibility: we’re just building the web. I don’t want to chunk it into just code and then layer on accessibility, layer on front-end performance. I think it all just needs to be the way that we build the web. I kind of liken it to back in the day when we designed sites in tables, and then we started moving to divs and a better, more semantic way of coding sites. That wasn’t an additional cost, it was just the way that you actually built sites. So, I think that we’ve had more success with that route: working at the developer level, not trying to go above them, having them start caring about front-end performance and just building it into their workflow, even though we may not necessarily be able to test it because of the cost associated with it, and the tools.

Tim:

So, once you had them sold on that—and you were saying that you had a story to tell—was there a story you were able to tell coming out of that first group that you’re able to now turn around to other groups and say, “Here’s what we did, and look what the impact was”? Have you come across that story yet, or is that still sort of an in-progress thing?

Brian:

Yeah, I would say it’s still in progress, and I think we’re realizing the different stories you need to tell to different groups. So, with IT, we can have a tool-based conversation, a technology-based conversation, and really dig into the weeds. But then if we think that we need funding in order to make the push at a larger scale, now the conversation has to kind of move over to business, and they don’t necessarily care about performance budgets, and kilobytes, and HTTP requests. They’re going to care about different things around revenue and speed to market, things like that. And user experience—we start thinking about perceived performance and making sure that what we envision actually gets implemented. So, there’s a few different stories that need to be told and I think we’re still in the process of formulating those at this point.

John:

One thing that I’ve noticed is how we market it. For our business partners, you’re talking about SEO—they love SEO—and you start explaining that performance has a large impact on SEO now and they’re going to quickly perk up and take notice, and performance will get on the radar much, much quicker. So, I think we’re learning, as a department, on how to market these different issues—how to market user experience, or how to market this idea of performance—and really showing them what it’s going to do to the bottom line.

Katie:

So, you’re both, at least some of the time, if not all the time, UX designers. What are some of the things in your projects—Brian, you mentioned that you try to just do what you can for now and hopefully that spreads—what are some of those things that you are looking for as you design and take into consideration to try to make performance part of what you do?

John:

From our perspective—I’m a visual designer as my primary role here—one of the things is this idea of perceived performance compared to actual performance. I know within our department here, that’s something that we can really play around with: playing around with this idea of click states or animations, things of that nature, that gives the user the idea that something is going on. Because at the end of the day, we’re not actually responsible for any code that goes to production, so it’s a little challenging in that regard. So, I think we’re trying to figure things out still.

Katie:

How do you communicate that? I know it’s kind of hard for designers to abstractly be like, “Here’s an animation idea that I have!” What are some of those things that you communicate? Do you mock ’em up in Keynote or something?

John:

Yeah, so in the projects I’ve worked on, I’ve actually created prototypes that kind of speak to these different animations and interactive micro-interactions, and we can then bring that prototype to our IT partners and literally sit down and walk them through what we’re expecting out of this animation or out of a loading screen, or any of that type of content element. So, we’re maintaining component libraries, we’re maintaining prototypes, and then we can use those to build in the requirements, and then we take that to our IT business partners.

Brian:

And I think sometimes the challenge with that is we don’t know what those wait times are going to be. While we’re in prototype mode, everything is lightning fast. But as you sort of move into the complexities of the applications that we’re building… I think a marketing website that is relatively static is one thing, but when you start getting into being able to manage your policy online, pay your bill, and all the complexities that come with that, we just don’t know in the early stages of our design process where the pain points are going to be of a particular… When you hit that continue button, we don’t know how many different systems have to be touched in order to get the data back. And so that’s where, really when we’re embedded with a line during development, those things sort of get revealed to us.

It’s sort of like, “Uh, when you click continue, it’s going to take about eight seconds. Should we tell the user that something is going on?” “Yeah, we need to come up with something for that.” So, I think it’s those types of things, where we just don’t know until—and I don’t even think IT necessarily knows until they actually build it and see what they’re dealing with as far as complexity of the back-end systems.

Katie:

And then they’ll come back to you and say, “Here’s some things that are lagging. Do you guys have ideas…?”

John:

Yeah, absolutely. So, for a current project that I worked on this past year, there were a number of services that could take upwards of a couple of minutes to get back, and so our basic approach was: anything that takes more than four or five seconds, we need to let the user know that it’s going to take that long, and then we can write any kind of content surrounding it if it’s going to take even longer, so that we’re setting up the expectations before an interaction actually happens and they’re not surprised when they’re not seeing a change as quickly as they think they should be seeing it.
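
The rule of thumb John describes (surface a message for anything over four or five seconds, with extra expectation-setting copy for longer waits) could be sketched as a small helper like this; the function name, thresholds, and copy here are illustrative, not Nationwide’s actual code:

```javascript
// Pick user-facing feedback for an operation, given its estimated wait.
// Thresholds and copy are hypothetical examples of the approach described.
const NOTICE_THRESHOLD_MS = 5000; // "more than four or five seconds"

function feedbackForWait(estimatedMs) {
  if (estimatedMs <= NOTICE_THRESHOLD_MS) {
    // Short waits: a spinner alone is enough, no expectation-setting copy.
    return { showSpinner: true, notice: null };
  }
  if (estimatedMs <= 60 * 1000) {
    return {
      showSpinner: true,
      notice: "This may take up to a minute. Please keep this page open.",
    };
  }
  // Services that can take minutes get explicit content before the
  // interaction happens, so the user is not surprised by the delay.
  return {
    showSpinner: true,
    notice: "This can take a few minutes. We'll let you know when it's done.",
  };
}
```

Setting the copy before the interaction fires, rather than after the user is already waiting, is the expectation-setting John emphasizes.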

Tim:

Wow, a couple minutes?

John:

Yeah.

Tim:

Ouch. I’m curious, did you do any experimentation or testing around that four to five second notification? Or how you might display that content? Any sort of A/B experiments or anything just to figure out how to best let the user know that “this is going to be a while and you might want to buckle in for a bit”?

John:

Yeah, I think some of the guys on our team are working on this idea of faking wait times in these prototypes, so that when we go to test, the user is not actually having the best experience. We’re trying to say, “Hey, these wait times can be a number of seconds, so let’s build that into a prototype and see how the user reacts to it, and see what kind of content or design interaction surrounding that wait time we’ll need to think about,” so that we can test it. And we test it later on for validation.

Brian:

Yeah, I think that’s spot on, John. Traditionally—it’s only been within the last six months or so that we have started building in these artificial wait times on the prototype when user testing. So, we’ve got a testing lab here at Nationwide, where just about everything that we design gets tested with users, which is just an amazing luxury that we have here. But I think what we’re trying to do with prototyping, with the creative technologist role on the UX team, is build out responsive prototypes so that we can take those into user testing, and now we’re sort of layering on top of that, building in more realistic scenarios that we’re able to test with users and get their feedback on…

There’s a project right now where the wait wasn’t several minutes, but it was sending a lot of data to a lot of different places, and it takes a while to get back, and so we’re playing with different animations, different content. Since it is such a long wait, do we display a button telling them, “Okay, we’re done, now go on to the next one,” or do we auto-advance them to the next page when it’s done? So, we’re able to test those types of things in the lab and be able to get feedback from users and then hopefully influence the build space.
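
The artificial, randomized wait Brian describes can be sketched as a promise that resolves a fake service call after a random delay in a configured range (roughly 45 to 60 seconds in the study he mentions; shortened here so the sketch runs quickly). The function names are hypothetical, not the team’s actual prototype code:

```javascript
// Resolve fake response data after a random delay in [minMs, maxMs],
// so each test participant sees a slightly different wait time.
function randomDelayMs(minMs, maxMs) {
  return minMs + Math.random() * (maxMs - minMs);
}

function fakeSubmit(responseData, minMs, maxMs) {
  const waitMs = randomDelayMs(minMs, maxMs);
  return new Promise((resolve) => {
    setTimeout(() => resolve(responseData), waitMs);
  });
}

// In the lab the range might be 45000-60000 ms; kept short here for demo.
fakeSubmit({ status: "complete" }, 50, 150).then((data) => {
  console.log("fake service responded:", data.status);
});
```

Swapping something like this in for an instant localhost response is what lets a lab session exercise loading states, auto-advance versus a “continue” button, and similar feedback designs under realistic conditions.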

Tim:

That’s fascinating. I think that’s one of the areas where we could really use a lot more exploration and data around, honestly, is just kind of that whole, “How do you manage the user expectations and how do you improve the perception of things when you’re operating at that sort of a delay?” What do you do and what works? I’m curious how far along that experiment is, because frankly I would love to hear the results.

Brian:

Yeah, so I think we just wrapped up testing this week, so I don’t have any results yet. But I think being able to manage expectations in the lab, and being able to get feedback from users, that it is going to take a while—which, honestly, even within the last year or so we wouldn’t have known to do. So, we would have just built a prototype where you hit submit and then less than a second later, because it’s localhost, everything magically appears. And now, because we’re working better and closer with IT, and being able to understand… A key part of the design process is meeting with our IT partners and talking with them about various screens and the complexities that might happen. And so, we’re able just now, within the last few months, to start building that in to give a more realistic testing scenario for our research team.

Katie:

Yeah, I think this is a really interesting perspective, where it’s kind of like it is what it is, it takes a couple of minutes for some things, so, “How can we make that better? Because we can’t necessarily make it way faster.” So, I think that’s awesome that you’re figuring out how that perception can be better, and going a different route about performance rather than, “Let’s just rewrite everything and make it faster!” “Okay, let’s manage those expectations to kind of get around that.”

Tim:

I also think that’s wildly important, like that step of making the user testing scenarios more realistic and introducing those artificial delays to mimic reality. So many times when I look at a site that comes out and it’s obnoxiously slow on a 3G network or whatever it happens to be, the first thing that crosses my mind is, “There’s no way anybody within the organization looked at this in a realistic environment, because it would have never gone live.” So, I absolutely love that you’re prototyping performance.

Brian:

Yeah, I think we’re excited about it, and I think that as we get smarter about understanding what’s happening on the back-end… Going back to those conversations that we’re having with IT, I think that one of the hurdles that we have is the difference between front-end performance and back-end performance, and so we kind of go back to those—and I don’t want to keep using “minutes,” I think that’s a rarity, but there could be seconds—but in the front-end performance world, it’s milliseconds. Having that conversation with IT and making them care about milliseconds instead of seconds I think is going to be sort of challenging as we continue these conversations.

I think the story seems to be, “Let’s not make it any worse. We know that you guys are working on the back-end…” That’s what they’re saying, that they’re continually focusing on the back-end, let us focus on the front-end as much as we can, and then hopefully we’ll meet in the middle and make a better user experience for our customers.

Katie:

How in-depth do you get with devices, and do you kind of fake different connection speeds for mobile—like you mentioned, 3G? Do you have different artificial delays and inflation—different levels of it? What all goes into it?

Brian:

I know, for the one test that we’re currently running, that we do have a randomization of the wait time, and so that’s something that we’ve started adding. Unfortunately, we don’t know what that wait time will be, so it’s not really a controlled environment where we say, “Well, this user is going to have a 3G experience and this user is going to have an ethernet experience,” or what have you. So, what we have sort of said is, “Well, this wait time could be between 45 seconds and a minute,” so we randomized that wait time and each user is a little bit different. But I think it is an interesting thought to be able to control and throttle speed based on different connection speeds.

As John was talking about, perceived performance: I think that one thing that I am personally hoping to start getting to in the lab is being able to more closely mimic the page load. There’s all sorts of REST calls and Ajax and things, and so pages don’t just appear all at once. You might get the header and footer instantaneously, but now for the middle of that page we have to go out and make several back-end calls and pull that data through JSON and what have you, and that takes a little bit longer.

I think it’s going to be interesting as we start to use Handlebars more and are able to further componentize our prototypes even during the prototyping phase. If we’re able to throw up the header and footer instantaneously and then wait one or two seconds and throw up the content page—I think that’s an interesting idea that we aren’t currently doing. But it’s something that interests me and I think it’ll be interesting to see if we’re able to bake that into future tests.
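
The staged load Brian sketches (shell immediately, slower content once its data arrives) can be modeled framework-agnostically. In their prototypes a Handlebars template would presumably fill the content region; a plain template function stands in here, and all names and markup are illustrative:

```javascript
// Render the page shell immediately, then fill the main content region
// once the slow data call resolves, mimicking a header and footer that
// paint instantly while the middle of the page waits on back-end calls.
function renderShell() {
  return {
    header: "<header>Site header</header>",
    content: "<p>Loading your policy…</p>", // placeholder until data arrives
    footer: "<footer>Site footer</footer>",
  };
}

// Stand-in for a compiled Handlebars template.
function contentTemplate(data) {
  return `<section>Policy ${data.policyId}: ${data.status}</section>`;
}

async function loadPage(fetchPolicy) {
  const page = renderShell(); // visible right away in a real prototype
  const data = await fetchPolicy(); // one or more slow JSON calls
  page.content = contentTemplate(data); // swap placeholder for real content
  return page;
}
```

Delaying `fetchPolicy` by one or two seconds in a prototype would reproduce the “header and footer instantly, content later” scenario for lab testing.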

Katie:

Is it almost like media queries but for time delays? Like, “If it’s going to take this long, we’ll present these things to help the perception, and if it only takes this much, we might just do one of those things.” I don’t know if that’s totally a stretch, but just the way that testing could be… I don’t know.

Tim:

No, I follow that. So, like different solutions for different time delays.

Katie:

Mhm.

John:

Yeah, we haven’t explored that within our testing. And to be honest, I think a lot of that is we try to be as realistic as possible in terms of what we have in our prototype versus what will actually be live once it goes through IT. So, I think we’re kind of hesitant to do a lot of that just because of how often that’s going to be the first thing to go if, all of a sudden, we don’t have the money that we need or the project is going long. So, getting really, really specific can get really tough. It’s great to test with, but at the end of the day I think something that specific probably wouldn’t make it to the…

Brian:

To the finish line, yeah.

Tim:

So, something like that is something that might happen maybe 6-9 months, a year down the road when you’ve built up—hopefully, if things go extremely smoothly—a culture for performance that spreads through several teams, and management levels, and stuff like that.

John:

Yeah. And as we grow, and as more of our business partners find out about us, the sooner in the project life cycle we’ll get engaged, and things like performance can start being talked about much sooner in that discussion.

Brian:

I think a huge hurdle that will really help, and we’re just at the beginning stages of this, is just the way that these project life cycles occur, that there’s just that big redesign phase. A lot of these applications may not be touched for a number of years, and then they get redesigned, and then they just kind of—I don’t want to say hibernation, but they just kind of… “Okay, we’ve done the giant redesign. Now let’s just keep the lights on.” I think that we’re getting closer to being able to have a story to tell around needing the money to be able to work on performance continually.

I think that I’ve had numerous conversations with IT leadership around, not necessarily front-end performance, but browser testing, for example. Not that long ago, there was Internet Explorer, Firefox, and maybe Chrome. If you go back a few years before Chrome was even around, the churn of these browser versions just wasn’t that great, and so IT was able to build something once and just sort of assume that it would be fine. But now, with Chrome releasing every six weeks, iOS coming out every year, and the fragmentation of Android…

So, I think that those conversations are helping keep the build cycle going and that we need to continually test. I think the next phase of that is being able to convince IT that not only should we be browser testing, but we also need to be performance testing and accessibility testing as well.

Tim:

Yeah, it’ll be interesting to see. It sounds like you’re fairly early on in this still, so do you have any ongoing monitoring in place yet? Something like SpeedCurve or New Relic? Something that’s at least, if not actually enforcing anything, just kind of keeping a baseline of where you’re at.

Brian:

We are definitely early, and the challenge—and again, I haven’t had a chance to dig into those different products—is that so many of our applications are behind a login. So, you have Nationwide.com, and we have a handful of assets or websites that you can just go to and spider, that are publicly accessible. But pretty quickly, like I said, the vast majority of our projects are either internal-only—you can’t even access them outside the firewall because they are just an internal asset—or behind a login.

So, I think those are challenges that some companies have probably figured out; I imagine the ones catering to enterprise companies have figured out those types of things. But I just haven’t dug deep enough to understand the capabilities around being able to send a fake username and password that actually has data associated with it behind a login.

Katie:

You’ve mentioned that your team, your mini-agency as you called it, is still fairly new. Do you all share the same view of performance, or are you still kind of getting other people to think about it in their work? What are some of the wins or challenges with doing that?

John:

I think, luckily, our UX department as a whole, the user experience department, has the same viewpoint that we do and we’re not in a situation where we’re having to twist the arms of our own leadership to get them on board with performance. So really, I think we’re trying to equip them with ways of going about influencing their business partners to get on board with that.

I think we’re in this educational realm where we’re trying to educate our leaders and get them on board, so I think we’re still figuring out how to do that. Because, as we’ve said, it’s still so new with our department, and with performance as a whole, really.

Brian:

And I think that we haven’t gotten to the maturity point yet where we would say we are not going to design a page with a carousel, high-res images and something else. Because we don’t know what the performance budget is going to be early in the design phase, right? We can prototype it and we have an idea, but I think that so much of it is reliant on our IT partners to execute those things in a smart way, where if we only load the carousel that’s at the bottom of the page after the user scrolls, now that’s sort of a different conversation.

So, it’s about making sure that IT cares enough about it that we can be a little more liberal in our design decisions and trust that they’re going to execute them in a way that is performance-minded, versus the worst-case scenario where we don’t know what’s going to happen, so we just design something that we already know can be built with performance in mind.

Tim:

Well, this all sounds fascinating. We haven’t talked to anybody at this scale. We haven’t, I don’t think, talked to anybody who is at the stage that you’re at, where you’re still getting things off the ground and you’ve got this long road ahead of you in terms of selling it within the company. So, it’s really interesting to hear the challenges you’re facing, the things you’re trying, and the things that aren’t working. But honestly, it also feels like something where maybe nine months down the road Katie and I should call you back up and get you back on the podcast to see how it went. “So, how did this go now that you tried all this stuff?”

Brian:

Yeah, I think nine months sounds about the right time frame to move the needle a little bit. [laughs] I’ve learned that getting things done quickly can be challenging. And there are always those priorities, right? So, I know that we were working on a project and something security-related came up, and they needed funding in order to make something more secure. So, when you think about our offerings of insurance, financial services, banking, I think that security is almost always going to take precedence. So, it’ll be sort of interesting to see—I think we just kind of hope for the best as things move on, hope that no one slots in ahead of us, and that we’re able to continue the conversation and steer the ship a little bit.

Katie:

Yeah, it seems like you’re doing an awesome job already of getting at least the testing and everything in place, and your team is thinking about it, so that’s awesome.

Tim:

Yeah, that’s great. So, we’ve got you penciled in in nine months. We’re going to come back and we’re going to expect to hear incredible stories and case study metrics. Does that sound fine? We’ll hold you to it here publicly. [laughs]

John:

It sounds good.

Tim:

Well, thank you so much for being on the podcast, John and Brian. If people want to follow along afterwards and check in on you over the course of the next nine months, where should they go? Are you guys on Twitter? Do you write blog posts on anything that you’re doing?

Brian:

So, I’m @BrianWGreene on Twitter. Unfortunately, I don’t have a blog. And I think that’s a little unfortunate, given the nature of Nationwide: although we’re an internal agency, we’re sitting inside a pretty conservative company. We’re starting to think about being able to put out some things, whether officially or unofficially. So, we’ll see how that evolves too, if Nationwide starts thinking it’s in our best interest to communicate more externally about the things that we’re doing.

John:

And I have to call out Nationwide, so obviously Nationwide.com. And then I’m on Twitter as well @JohnShermanHoyt.

Tim:

Fantastic. And yeah, it would be awesome to see Nationwide being very vocal about some of the things they’re doing, because it sounds like you’ve got some interesting things coming up. So, yes, let’s hold our breath for a Nationwide dev blog.

Brian:

Exactly. [laughs]

Tim:

Well, thanks guys.

Katie:

Yeah, thank you both so much. This was awesome.

John:

Thanks a lot. Have a great day.

Brian:

Yeah, thank you guys.

Tim:

Thank you for listening to this episode of the Path to Performance podcast. You can subscribe to the podcast through iTunes or on our site, pathtoperf.com. You can also follow along on Twitter @pathtoperf. We’d love to hear what you thought, so feel free to drop us a note on Twitter or leave a raving and overly-kind review on iTunes. We like to read those. And if you’d like to talk about being a guest or sponsoring a future episode, feel free to email us at hello@pathtoperf.com.