Leigh Tami is currently the Director of Data and Analytics for the NYC City Parks Commission, after four years of leading groundbreaking city data initiatives in Cincinnati.
How did you get into doing data work?
After law school at the University of Cincinnati, I didn’t really want to practice law and fell into data analytics work by accident. I was interested in working for the Office of Performance & Data Analytics in Cincinnati. That’s where I really picked up programming, and I learned some basic things like Excel. I also learned a lot about how city government works. As soon as my then-boss announced that he was going to be leaving for another opportunity, I stepped into the interim role and realized, oh, this doesn’t need to be tracked with Excel spreadsheets… I took over the role permanently and then within six months, which was very, very fast, we built out our data analytics infrastructure.
What was one of your biggest successes in that role?
The heroin overdose response tracker. It was successful because it did something with data that hadn’t been done before. It took computer-aided dispatch data, or 911 call data, and used it as a proxy for where we were seeing fentanyl spikes. You can see how things vary by neighborhood; you can see how they shift by day of the week and time of day.
As a result of that, we had a number of community groups and health groups who started using our tool to figure out which neighborhoods to target for things like Narcan training. Now I think that’s something that’s happening in a lot of cities; at that point, it wasn’t really happening anywhere.
It’s something that can be super easily replicated anywhere… I think Tempe, Arizona has been using our model. Northern Kentucky started using it, so I’m really proud of that.
It sounds like you knew that you wanted to have a certain data set to help EMS, but then that data became valuable to other people in other ways. Was that something that you anticipated?
We didn’t do any research to drive that. I think we realized we had something because even when I would show it to internal administration, everyone was clearly like, oh, my God, this is something we really want to know.
This informed my philosophy about government data overall, which is that the questions the government asks itself about performance––how are we doing, how many of this, or where are we having issues––are really the same questions the public has, too. It might be framed slightly differently. They might be asking it through the lens of: what’s going on in my neighborhood, is my neighborhood doing well or not doing well compared to other neighborhoods, or what kind of crime happened in my neighborhood last week. That’s why we were able to build tools using open data that spoke to resource optimization and strategic deployment of resources, but also gave people something that they could understand. For me, that was the linchpin of the operation. This is something that doesn’t just speak to one group; it’s something that’s usable by all ranks.
Can you talk about how you look for what you’ve called high-value data?
Essentially, there are a lot of systems that already exist with tracking mechanisms built in, generating data based on what we’re doing. My approach has always been to find those first and piggyback off of whatever the workflow is, because those are always going to be more accurate, primarily because the job isn’t just collecting data; the job is accomplishing something that has tangible consequences or results. People know if a cop doesn’t show up. People know if they don’t get paid. People know if they don’t get reimbursed. For that matter, people know if a pothole didn’t get filled.
What I realized, especially in my work in Cincinnati, is that data is often generated just because something happened. With the computer-aided dispatch data, the data is generated because somebody called 911. They explained to the call taker what the issue was, and then we had to dispatch resources.
In that case, the quality of the service informs the quality of the data, which is really key, and it’s true in a lot of other scenarios. The incentive to get it right, to send the right thing, is really high, which means that the quality of the data is really high.
The same thing is true for financial transactions. It’s not just data: it’s their money, right? You want to make sure you get that right because if you don’t, you’re not going to get your money.
What brought you to New York?
I was in Cincinnati for almost four years and I could feel myself starting to become part of the wallpaper. I felt myself not having as much energy to fight some of the battles or to be constantly moving and innovating. I’d been thinking about trying to get to New York for a while. My only apprehension was that I wanted to have the power to move things forward, and to be able to push back on other things if they’re in the way. [My now-boss] basically said, “That’s what I’m looking for. If you come work here, you’ll have that.”
What are some of the big projects that you’re working on?
When I got here, I realized I have this unbelievable team: these people are brilliant, they’re driven, they are creative, but a lot of them have been manually cleaning data. There wasn’t an infrastructure that was built out to support automating a lot of the things they were doing.
So we’ve been pushing data analytics infrastructure, building out the data warehousing effort and figuring out what we need from our data. Once we have that, we’re going to build out some public-facing web tools and dashboards, and also start building more models internally.
You talked a little bit about how in Cincinnati—in a smaller place—it’s just easier to make things happen. Have you found pros and cons?
In Cincinnati there weren’t any bureaucratic barriers already existing… We were able to build something and make it sustainable before there was organized resistance.
The disadvantage is that when you are doing everything yourself, you are doing everything yourself. I had a really, really small team in Cincinnati. We pretty much operated on our own. As much as I believe in free and open-source tools, we really couldn’t use them in Cincinnati. We had to find commercial tools, duct-tape them together and get them to work for us. It definitely limited what we could do and how we could do it.
Here, I have more resources and a bigger team with a whole different skill set, so the level of what we’re able to accomplish is massive. But the institutional barriers are there, because someone else has tried versions of this before.
Do you have a vision or dream projects that you would like to see happen in the future?
We’re going to be creating some things that are public-facing. My dream for any of this is two-part. Number one: my goal for whatever I’m doing is to make it something that runs without me, that is self-sustaining. In government, publishing things on the Internet is often the best way to do that, because it means that you’re not just relying on people. I’ve got such a talented team, I think we can do some really cool stuff in terms of building models and building some applications. Right now we’re in the stage where we’re unearthing all this data. We’re digging it out, we’re figuring out what we have, and we’re trying to make it as usable as we can, as quickly as we can. This is, in my mind, the hard part. It’s figuring out what you have and building the pipes to get it out. Once you do that, it’s time.