How Katana Video is Revolutionizing Podcast Editing: AI-Powered Solutions with Sam Bhattacharyya


Are you frustrated with the headaches of editing Zoom podcast recordings? Wish you could turn raw footage into polished video podcasts—without hours of tedious work or a big budget? This episode is for you.
In this episode, host Mathew Passy sits down with Sam Bhattacharyya, CEO of Katana Video, to explore how AI is transforming video podcast editing—especially for creators working with Zoom recordings.
Whether you’re a seasoned podcaster or just starting out, you’ll learn how Katana Video’s AI-powered editing brings pro-quality video podcasts within anyone’s reach—no editing background required. Check out this episode to hear Sam’s story, understand common editing pitfalls, and get actionable strategies for making your next podcast sound and look its best, fast.
Episode Highlights:
Sam’s Journey to Video Tech Innovation: Sam recounts his unconventional career path, starting with building an e-learning platform for West African students, and how his early work in video compression and AI-powered streaming tech led to forming partnerships with Streamyard and eventually, the launch of Katana Video. [00:01:11]
The Real Problem with Editing Zoom Podcasts: Why so many podcasters still use Zoom despite its limitations, the challenges editors face with Zoom’s single-track recordings, and how Katana Video bridges the gap: using AI to separate speakers, automate camera angle switching, add name tags, and create professional-grade visuals with minimal effort. [00:07:36]
The Future of AI in Video Editing: Sam’s perspective on the current state of AI tools like Opus and Descript, and why true “creative” edits are still a human domain, but automation can handle repetitive, standardized editing tasks. [00:17:53]
Trends & Tech Wishlist for Podcasters: Sam encourages creators to focus less on terminology and more on making content accessible and valuable, and shares his wish for smarter AI quality control so creators are only shown high-value, relevant clips. [00:23:17]
Resources & Links:
- Katana Video — Try Sam’s platform for editing Zoom recordings into video podcasts.
- Connect with Sam Bhattacharyya on LinkedIn
- Subscribe to the Katana Video YouTube channel.
Podcast Recommendations from Sam:
- Dwarkesh Patel Podcast — In-depth conversations on AI and technology.
Stay Connected:
- Visit podcastingtech.com for weekly episodes, insights, and podcast news
- Enjoying the show? Leave us a rating and review
**As an Amazon Associate, we may earn commissions from qualifying purchases of podcasting gear from Amazon.com. We also participate in affiliate programs with many of the software services mentioned on our website. If you purchase something through the links we provide, we may earn a commission at no extra cost to you. The team at Podcasting Tech only recommends products and services that we would use ourselves and that we believe will provide value to our viewers and readers.**
For additional resources and insights visit podcastingtech.com or follow us on social media:
- Instagram: @mathewpassy
- LinkedIn - /mathewpassy
- Threads: @mathewpassy
- Twitter/X: @mathewpassy
- Facebook - /podcastingtech, /mathewpassy
PODCASTING TECH IS POWERED BY:
- Captivate - Easy, professional podcast hosting to create, grow and make money from your podcasts
- Podpage - Build a beautiful podcast website in 5 minutes
- Riverside.fm - Record Podcasts And Videos From Anywhere
- Castmagic - 10x Audio Content With AI
- Podmatch - Matching Hosts and Guests for Podcast Interviews
- Hostinger - Website hosting
- Podgagement - Engage your audience and grow your podcast!
EQUIPMENT IN USE:
- Rodecaster Pro 1st Gen (No longer available). Consider the Rodecaster Duo or Rodecaster Pro II
- EV RE20 with 309a Shockmount
- Rode PSA1+
- iPhone Continuity Camera (previously the Logitech Brio 4K)
- DCMEKA In-Ear Monitors
- BusyBox Smart Sign
Welcome to Podcasting Tech, a podcast that equips busy entrepreneurs
Speaker:engaged in podcasting with proven and cost effective solutions
Speaker:for achieving a professional sound and appearance. I'm Mathew
Speaker:Passy, your host and a 15-year veteran in the podcasting space.
Speaker:We'll help you cut through the noise and offer guidance on software and hardware
Speaker:that can elevate the quality of your show. Tune in weekly for
Speaker:insightful interviews with tech creators, behind-the-scenes studio tours and
Speaker:strategies for podcasting success. Head to podcastingtech.com
Speaker:to subscribe to this show on YouTube or your favorite podcast platform and
Speaker:join us on this exciting journey to unlock the full potential of your
Speaker:podcast. We're going to take you down to Houston today.
Speaker:We are chatting with Sam Bhattacharyya. He is the CEO of
Speaker:Katana Video. That is Katana, like the
Speaker:blade, and currently the platform
Speaker:is here to auto-edit Zoom recordings and turn them
Speaker:into video podcasts, something I'm sure that people hearing this
Speaker:are thinking, oh yes, please. Sam, thank you for
Speaker:joining us here today. Thanks for inviting me. Before
Speaker:we talk specifically about how Katana works,
Speaker:just tell me a little bit like how did you get interested in
Speaker:streaming media? Like, you know, why are you working on
Speaker:content platforms? You formerly worked with Streamyard, but like
Speaker:what drew you to this industry specifically?
Speaker:Yeah, so let me start with the most recent and then go back to how I
Speaker:got into video in the first place. So most recently I used to work
Speaker:as the head of AI for Streamyard, which is a similar platform to Riverside that
Speaker:we're using, a little bit more focused on the streaming aspect.
Speaker:And there we got into kind of
Speaker:AI and editing features which is how we got into some of
Speaker:this world, how I got into Streamyard and how
Speaker:I got into this world of video streaming. I have an interesting story
Speaker:but I'll try to keep it short. So, how I originally started my
Speaker:career: I've never actually worked for a regular company, or I mean,
Speaker:Streamyard is real, but I didn't apply for a job in the usual way. I have a
Speaker:bit of a non-traditional career path. So right out of grad school
Speaker:I started a company, my own company and the idea was to
Speaker:make an e-learning app for students in Sub-
Speaker:Saharan Africa, specifically in West Africa. And my
Speaker:co-founder was from Nigeria. He had studied there and then gone to
Speaker:study in the US for grad school. And we both saw this problem of
Speaker:the Internet not being very accessible for taking things like online classes. So our
Speaker:idea was kind of like Khan Academy but for students in West Africa.
Speaker:Despite my parents' hesitations, I moved to Ghana and
Speaker:Nigeria for a year. With my co-founder, we built an app to help students
Speaker:study for exams, like a Khan Academy. We did the thing.
Speaker:We built an actual kind of exam preparation app with online
Speaker:courses. And a big part of what we had done was we had built some
Speaker:interesting video compression technology to make online
Speaker:courses very accessible on really, really slow Internet
Speaker:connection so that you could watch or use like an online video course on
Speaker:a 2G connection. And in those countries, I
Speaker:think still to this day, you pay per gigabyte. Right? Right. So thinking about
Speaker:paying per megabyte, if it costs you
Speaker:$2 to watch an online course just in
Speaker:bandwidth, that's a barrier, especially for people
Speaker:in those countries, and for students of all people.
Speaker:So that was a big thing that didn't work out as a business. We had
Speaker:users, we had actually about 50,000 students studying for our
Speaker:exams using our app. But it was hard to monetize and get anywhere near
Speaker:covering our costs. So we eventually shut that down and
Speaker:made the content free. And then we moved back to the US and tried to
Speaker:license our technology to other companies.
Speaker:We actually created this, like, patented video compression technology.
Speaker:No joke, we got in front of the right
Speaker:people at YouTube and Netflix. Without
Speaker:exaggeration, we actually got in the door with those people and quickly realized
Speaker:that while we had an interesting idea, it wasn't practical to deploy at scale at
Speaker:any real kind of video streaming
Speaker:platform. After a couple more pivots,
Speaker:we eventually ended up creating AI for
Speaker:video streaming and video conferencing. So we
Speaker:had specialized in making features like virtual backgrounds and background noise
Speaker:removal. And then during the pandemic, we had gotten in touch with
Speaker:Streamyard who ended up using our technology for their own
Speaker:platform. And then they ended up acquiring us. And that's kind of how
Speaker:we got into Streamyard and like video streaming, it was just kind of an
Speaker:accident. But over the years, I'd built up experience around kind
Speaker:of video, video processing, especially AI as it
Speaker:relates to video. As someone who
Speaker:created a platform and then kind of like realized that the video compression was
Speaker:the big tool here and, you know, then was able to get in front of
Speaker:YouTube and Netflix and then go work for Streamyard. Like, do you do a lot
Speaker:of content creation yourself? Do you work with video, or
Speaker:is it really that you figured out this problem and the coding itself
Speaker:is where your passion lies? I think I'm much more on the
Speaker:coding side than the content side, though. I actually
Speaker:made some of our first online courses when we did that
Speaker:app. But we quickly realized that it was better to actually hire real
Speaker:teachers, especially from Ghana and Nigeria, to make those courses. So part of
Speaker:it was that it was objectively better to have actual teachers
Speaker:making that content. And then more recently, you know,
Speaker:I've just kind of been stringing along. I had started making
Speaker:content when I was at Streamyard as a way to
Speaker:generate empathy for the people who are making content on Streamyard. Like, so I was
Speaker:like, you know, I was a product manager at the time, and I started
Speaker:making content to empathize with people who were using Streamyard to create
Speaker:content. But I actually kind of fell in. I
Speaker:grew to like it. In the same way I had taught
Speaker:myself to code, I figured, why can't I teach myself to make content?
Speaker:It's very different to put your code out there than it is to
Speaker:put yourself out on the front stage and, you know, get that kind of feedback
Speaker:that is required. Yeah, well, you know what,
Speaker:that's true. And I am incorrigibly technical,
Speaker:if that makes sense. But one of the things that I realized that makes me
Speaker:kind of a bit different from most normal engineers is because I did a startup,
Speaker:because I did a lot of this stuff, that you're inherently putting yourself out there
Speaker:anyway. And so I had been no stranger to making pitches, to trying
Speaker:to convince people, you know, it was a different kind of use case. I was
Speaker:trying to convince people, hey, fund our platform to help students in, you know,
Speaker:Ghana and Nigeria, study for the exams. But at some point I was still making
Speaker:pitches, going on stages, trying to convince people, like, what we're doing is
Speaker:interesting. And that didn't seem that different from
Speaker:putting yourself out there online. It's just, you know, you have
Speaker:to, maybe when you're building a show, for example,
Speaker:treat it a bit more like an actual product. Like, who is your audience? Why
Speaker:are they interested? It's not like, so transactional as, like, you know, you
Speaker:donate to my company or invest in my company or whatever. But
Speaker:there's a sense in which I had gotten used to
Speaker:talking to people about what I was working on and trying to convince people to
Speaker:be excited about what I'm working on. That already kind of came into the door
Speaker:when I started making content. Gotcha. Okay, so
Speaker:tell us more about the actual Katana video platform.
Speaker:So how does it work? How do we sign up? You know, what can we
Speaker:expect? What problem is it solving for us? Yeah,
Speaker:so let's start with the problem that it's solving.
Speaker:So when I started working with
Speaker:Streamyard, I learned that many people make podcasts, obviously.
Speaker:But the number one alternative to Streamyard
Speaker:that people used for recording podcasts
Speaker:was not Riverside, it was Zoom. Okay? So most
Speaker:people who start with podcasting start out recording
Speaker:on Zoom. And there's a reason that platforms like Riverside and
Speaker:Streamyard exist, because Zoom is not built for recording
Speaker:podcasts. And one of the main things that
Speaker:makes it hard to work with Zoom and turn that into a video podcast is
Speaker:that Riverside and
Speaker:similar platforms like Streamyard will give individual
Speaker:recordings, high quality recordings for each person
Speaker:doing the interview. And that's super important for editing, to be able
Speaker:to do, for example, camera angle switching and to know who's speaking
Speaker:when. And editors
Speaker:generally dislike working with Zoom recordings because Zoom
Speaker:doesn't give you that. Zoom just puts everything together. It's a nightmare to work with
Speaker:those Zoom recordings, and people work around it.
Speaker:So people will put, like, overlays on top of a Zoom recording. But I
Speaker:had a pretty deep background in computer vision
Speaker:and kind of like old school AI
Speaker:and I figured, well, I mean, there's no reason you couldn't actually figure
Speaker:out who's speaking when; it just takes some upfront effort and work.
Speaker:And so the simple idea was, okay, well, let's
Speaker:actually take a Zoom recording and then
Speaker:extract the video and audio and separate them as if they were
Speaker:local recordings. Okay? Then you could do things like multicam and
Speaker:camera angle switching and adding like name tags and all the stuff you would normally
Speaker:do. All the visuals you would normally put in a normal video
Speaker:podcast. It would be much easier once you knew who was speaking
Speaker:when, right? So it was a very simple idea. What if we just didn't
Speaker:fight this idea that people are going to use Zoom, or try to get them to use Riverside
Speaker:or Streamyard or some other platform like that? What if we just met them where
Speaker:they were? You're using Zoom. Okay, whatever your reason, there are still valid
Speaker:reasons for using Zoom. So let's make
Speaker:it easy to make a Zoom recording look like it was recorded and edited in
Speaker:a more professional platform like Riverside. And that was the high level.
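To make that idea concrete, here is a minimal sketch of the per-speaker separation step, not Katana's actual pipeline: it assumes a Zoom recording with a fixed two-person side-by-side gallery layout at 1920x1080 and simply crops each tile into its own file with ffmpeg. The filename and layout are illustrative only; a real system would detect the layout and also separate the audio.

```python
# Minimal sketch: carve a single Zoom gallery-view recording into per-speaker
# video files, as if each guest had a local recording. Assumes a fixed
# two-person side-by-side layout at 1920x1080 (an assumption, not Katana's method).
import subprocess

def split_gallery_recording(zoom_file: str, width: int = 1920, height: int = 1080) -> list[str]:
    """Crop each speaker tile out of a two-up Zoom gallery recording with ffmpeg."""
    tile_w = width // 2
    outputs = []
    for i in range(2):
        out = f"speaker_{i}.mp4"
        subprocess.run(
            [
                "ffmpeg", "-y", "-i", zoom_file,
                # ffmpeg crop filter takes w:h:x:y; left tile at x=0, right tile at x=tile_w
                "-filter:v", f"crop={tile_w}:{height}:{i * tile_w}:0",
                "-c:a", "copy",  # keep the mixed audio for now; who-is-speaking comes later
                out,
            ],
            check=True,
        )
        outputs.append(out)
    return outputs

if __name__ == "__main__":
    print(split_gallery_recording("zoom_recording.mp4"))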
Speaker:So when you go to Katana Video,
Speaker:the goal is to make it easy to turn
Speaker:your Zoom recording into a video podcast. And a big aspect of that is
Speaker:automating those visuals, like the multi-camera angle switching and whatnot,
Speaker:so that within like five minutes you have something that looks like a
Speaker:professionally edited podcast with a lot of the
Speaker:visuals that would normally be done with
Speaker:tools like Riverside or Descript or whatnot. And then
Speaker:I'm also trying to make sure you can have a lot of the actual edits
Speaker:and cuts done. The basic ones, not like very, very artistic, but the basic
Speaker:ones, like making sure you cut off the recording before the interview actually
Speaker:starts and cut it off after it actually ends. Because, you know,
Speaker:when you make a recording, there's the pre-
Speaker:interview stuff and the end-of-interview stuff, as well as detecting the
Speaker:obvious outtakes, like, you know, can you cut this part
Speaker:out. So that someone who's just getting started with podcasting can get something
Speaker:out of the box that's better
Speaker:than what they started with for very little
Speaker:effort. It's not to the level of a professionally edited podcast by any
Speaker:means, but it's certainly better than what you started with.
Speaker:And the idea was to get something
Speaker:passable in five minutes. And so that's kind of like you just upload
Speaker:a Zoom recording and you get a
Speaker:good-looking podcast in five or ten minutes. So what does it
Speaker:do? Like, what is the AI doing? What is it looking for? Are
Speaker:there things that we should be doing when we're recording
Speaker:to make the AI's job easier?
Speaker:Yeah, so. Well, one, everything's a work in progress, so I will be improving these
Speaker:algorithms as we go. But one is in terms
Speaker:of the core, who's talking when.
Speaker:The only thing that seems to mess it up is when two people are talking exactly
Speaker:at the same time. And can you really
Speaker:blame it? You know, even as an editor you would have a hard time.
Speaker:So in those circumstances we just default to showing both people
Speaker:at the same time. You just don't highlight one of them when two people
Speaker:are talking at the same time. That's it.
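As a toy illustration of that fallback, here is a minimal sketch with invented speech segments standing in for whatever who-is-speaking detection produces: pick the lone active speaker's camera, and drop back to showing everyone whenever speech overlaps or nobody is talking.

```python
# Toy sketch of the fallback rule described above; the segments are invented
# and would in practice come from a diarization / active-speaker pass.
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str
    start: float  # seconds
    end: float    # seconds

def layout_at(t: float, segments: list[Segment]) -> str:
    """Return which camera to show at time t: a single speaker, or 'both' on overlap/silence."""
    active = [s.speaker for s in segments if s.start <= t < s.end]
    if len(active) == 1:
        return active[0]
    # Two people talking at once (or nobody): don't guess, show the grid.
    return "both"

if __name__ == "__main__":
    segs = [Segment("host", 0.0, 4.0), Segment("guest", 3.5, 9.0)]
    for t in (1.0, 3.7, 6.0):
        print(t, layout_at(t, segs))  # host, both (overlap), guest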
Speaker:I think the idea is very much like you don't have to do anything specific
Speaker:to make the job easier. There are just some edge cases that
Speaker:I'm finding that I need to handle.
Speaker:For example, someone had an intro section that
Speaker:they recorded at the end of their podcast. They were like, oh, we forgot to do
Speaker:the intro, let's do it at the end and then we'll
Speaker:fix it in editing and post production. And that's like such a normal
Speaker:natural thing. But that kind of messed up my very simple algorithm that like assumes
Speaker:that the start happens before the end, if that makes sense. And that just
Speaker:kind of messed that whole thing up. And so, I mean those are edge
Speaker:cases that you'd want to handle gracefully in the future. But
Speaker:the idea is like you don't have to do anything special. Okay. And then
Speaker:once it's done, is it
Speaker:take it or leave it or can we take what you're
Speaker:creating and then, you know, bring it over to one of our editors and you
Speaker:know, make some finesse edits or you know, maybe tweak a few things here or
Speaker:there to get it where we want it to be. Well, so it's
Speaker:got a transcript-based editor
Speaker:built in. I consciously didn't make
Speaker:this a kind of you-can-export-this-to-Adobe tool, and maybe I will in
Speaker:the future. But the use case that I was looking for
Speaker:was targeting a different segment. So there are plenty of
Speaker:very nice editing tools out there, like Descript, and you
Speaker:know, you can edit in Riverside. But this was definitely designed for the
Speaker:people who wouldn't otherwise have
Speaker:or use those editing tools, or hire someone who does have them.
Speaker:So I spent a lot of effort just making sure that it
Speaker:works by itself. So it has its own full stack. It's
Speaker:basically like, you can tweak it, especially the adjustments, like the branding, the look,
Speaker:the feel, the customization, and you can edit it with transcript-based
Speaker:editing, and then it renders, you know, it
Speaker:has a full rendering stack and whatnot. But I haven't quite put in, like, you
Speaker:can export the raw project file as an Adobe Premiere
Speaker:project or something like that. Maybe I will in the future, but I
Speaker:don't feel like I have enough of the auto-edit stuff yet that it
Speaker:makes sense to kind of do that. And I do want to improve the
Speaker:auto edit capabilities in the future. All right,
Speaker:so how does someone get started, right, like what are the pricing plans? Like what
Speaker:does it look like to work with it? Do we have to upload? Is there
Speaker:an integration with Zoom built into it? What does it look like for
Speaker:someone who's hearing this and wants to try it out? Yeah, well,
Speaker:so first it's Katana Video, that's the address
Speaker:and it's free right now. You couldn't pay me if you wanted to because I
Speaker:am still in beta and figuring things out. And the
Speaker:idea behind it was to have a free option that's always
Speaker:available. And the high level idea behind the free
Speaker:option was you can make your zoom recording look good
Speaker:and it'll have all of the camera angle switching and all of those like branding,
Speaker:customization options just out of the box, for free, forever. And
Speaker:there will be a paid plan which has some additional auto-edit capabilities.
Speaker:So in terms of auto edits, I think the idea is like
Speaker:to give you everything you need to get a
Speaker:raw recording to something you would happily upload on YouTube. Like
Speaker:there's a couple more things you'd need to do and one of them is like
Speaker:generating a really nice catchy intro, for example. That's one of the big things
Speaker:I'm focusing on right now. And so if you look at professionally edited
Speaker:podcasts on a video show on YouTube, they'll often
Speaker:have an intro section which is like a catchy
Speaker:back to back compilation of sound bites. Sometimes with effects.
Speaker:Like they'll zoom in on one of the speaker's faces, maybe they'll
Speaker:highlight some words in the background. At the most extreme end
Speaker:you'd see stuff like Diary of a CEO that's a bit extreme for what an
Speaker:automated tool could do at this point, but you have like less
Speaker:extreme versions of that where it's like a catchy intro. So that's the
Speaker:kind of thing that would be on the paid plan. And so you would kind
Speaker:of generate one of these like intro teasers as part of the paid plan as
Speaker:well as social media clips. So there's that
Speaker:functionality that would be on the paid plan primarily, but then the core
Speaker:just making your Zoom recording look good, that's free
Speaker:forever. And yeah,
Speaker:I mean, it's also, well, as long as Katana Video
Speaker:exists or whatever. Like, you know, how am I supposed to know what
Speaker:things are going to look like in 20 years? But where do you think AI
Speaker:is going in terms of this stuff? Like, I love the idea that there
Speaker:are aspects of content
Speaker:creation and content editing that are
Speaker:monotonous and repeatable and you know,
Speaker:don't really require a lot of feel
Speaker:to get them right. Right. Like switching between two speakers. It's a
Speaker:fairly simplistic concept. But
Speaker:do you think AI will ever really replace human
Speaker:editing and editors, or ever, you know, take it to the
Speaker:level that it will be capable of making something that
Speaker:is artistic or, you know, has emotion
Speaker:to it, or is it really just, you know,
Speaker:factual content editing and
Speaker:sharing and you know, quality control more than,
Speaker:you know, character control, let's say? Yeah, I
Speaker:do have some opinions that may be interesting to the audience.
Speaker:So one, I think there's a lot of misunderstanding of how AI
Speaker:works, and even among the people who actually do have an idea
Speaker:of how it works, no one's clear on what the future is
Speaker:going to be like. But I have a thesis about
Speaker:how AI is going to impact video editing. I think one of the
Speaker:things that you have to understand right off the bat is that there aren't really
Speaker:AI models that are trained to edit video. And
Speaker:I want to make sure and emphasize that point. There aren't really AI
Speaker:models that are in any deep or
Speaker:fundamental way trained to predict what edits you would make in
Speaker:video content. And that comes from the fact that these
Speaker:large language model labs like OpenAI and
Speaker:Google haven't actually sat down and hired hundreds of video
Speaker:editors to build the data sets. It goes back to
Speaker:the fundamental fact that they weren't built for this stuff. And that doesn't stop people
Speaker:like Opus from using ChatGPT as a way of
Speaker:editing. But that's why the results are so mixed. If you
Speaker:use a tool like Opus Clips or even, like,
Speaker:you know, Riverside's clips, like, I don't think anyone would mistake those results
Speaker:for something that was created by a, you know, like a trained human editor.
Speaker:Like, if a human editor gave
Speaker:you a clip that started in the middle of a sentence, you would say something's
Speaker:wrong with you. Right? But there are so many obviously wrong things,
Speaker:you know, with some of these clips. And I think that's why. So I think
Speaker:that people will start to address it. That's what I'm doing. But as
Speaker:people figure out how to actually
Speaker:get AI not just to understand what's going on in a video, but also to
Speaker:decide what edits to make, I think you're going to see two
Speaker:distinct directions, and that's coming from my experience with software.
Speaker:So in software, you have tools that are being used to speed up
Speaker:software tasks so people who are programmers can
Speaker:now code faster because of these coding tools. And I think you're going to see
Speaker:the same thing with AI editing tools. So those editing tools will basically
Speaker:have the equivalent of autocomplete. I think Descript is probably
Speaker:the best example that I've seen of this so far, where they have, like, smart
Speaker:transitions and smart kind of layouts that'll predict
Speaker:what it is you're looking for and just kind of speed up that process. I
Speaker:think that's probably where the most useful innovations
Speaker:are going to be in terms of AI editing. And in that sense, of all
Speaker:the companies that I've seen doing anything in this editing and creation space,
Speaker:Descript is probably way ahead of other companies on doing that.
Speaker:Like, I actually don't think that Opus is particularly interesting in
Speaker:that respect. And then there's what I'm trying to do, which is
Speaker:not to build a tool for an editor, but rather to build something that's
Speaker:similar to Squarespace, where someone who is not an editor
Speaker:and doesn't have the budget to hire an editor can still get something that's okay
Speaker:very quickly. And so in the sense that Squarespace
Speaker:lets you get a website without necessarily hiring a programmer or
Speaker:learning to program yourself, the idea was, can you build
Speaker:AI that can get you something that is
Speaker:maybe not as good as what you get from a professional editor, but passable.
Speaker:Now, regarding the question of will you ever get something that'll
Speaker:reach the creative levels of an editor,
Speaker:I want to appeal to this meta sense of what is possible with AI.
Speaker:So just the high level benchmark is if you
Speaker:gave the same thing to 10 humans to do and
Speaker:they would all give you 10 different answers, then that's not a good kind
Speaker:of task to automate. And so if you're talking
Speaker:about like really fancy edits and you gave the same editing,
Speaker:you know, the same kind of like mandate to 10 different like high level
Speaker:editors, and you got very different
Speaker:responses back, that's probably not something you can automate. And that's why,
Speaker:you know, I struggle to see how you could create like edits of the level
Speaker:of a Super Bowl ad or a Hollywood movie ever being just
Speaker:generically edited by an AI. But I
Speaker:think the argument here is that a lot of more
Speaker:mundane kinds of content that people are making are not Hollywood edits. And
Speaker:the edits that you're making aren't like that creative. And so
Speaker:there's a lot of this kind of mid-level content for which
Speaker:there's an obvious like answer of like where does the recording start, where does it
Speaker:end? And those are tasks that are very, very much automatable.
Speaker:Because if it's like 10 people would all look at the same thing and say,
Speaker:yeah, the right thing to do is start here, start there, do this, do that,
Speaker:then you could imagine automating that.
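To make that benchmark concrete, here is a toy sketch with invented numbers: a cut decision counts as automatable when independent editors' answers cluster tightly, and stays a human job when they scatter. The threshold and timestamps are illustrative assumptions, not anything Katana actually measures.

```python
# Toy illustration of the "10 editors, 10 answers" test described above.
from statistics import pstdev

def is_automatable(annotations: list[float], tolerance_s: float = 2.0) -> bool:
    """Treat a decision as automatable if the editors' timestamps agree within a small spread."""
    return pstdev(annotations) <= tolerance_s

if __name__ == "__main__":
    # Where does the interview actually start? Ten editors, ten timestamps (seconds).
    trim_start = [62.0, 61.5, 63.0, 62.2, 61.8, 62.5, 63.1, 62.0, 61.9, 62.4]
    # Which moment makes the best creative teaser? Far less agreement.
    teaser_pick = [45.0, 310.0, 128.0, 910.0, 45.0, 677.0, 512.0, 233.0, 89.0, 750.0]
    print("trim start automatable:", is_automatable(trim_start))   # True
    print("teaser pick automatable:", is_automatable(teaser_pick)) # False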
Speaker:And the goal of what I was looking for is finding the subset
Speaker:of editing tasks that fit that category of
Speaker:things. So it's like, you know,
Speaker:I would never imagine an AI just coming up with a really great
Speaker:Super Bowl ad, but most people aren't creating Super
Speaker:Bowl ads, if that makes sense. That is very, very true.
Speaker:So, all right. We are chatting with Sam Bhattacharyya. He
Speaker:is the CEO of Katana Video. You can learn more at
Speaker:Katana Video. We've also got a LinkedIn
Speaker:connection for Sam, so if you want to learn more about him and some of
Speaker:the other places he's worked and the things that he's up to, you can follow
Speaker:him there. Sam, before we let you go, we always like to ask folks a
Speaker:few questions about the space in general. Now, our show usually
Speaker:focuses more on podcasters. You're more in the content
Speaker:space, which isn't just limited to podcasters. But I'm still curious.
Speaker:Is there something else in the podcasting space where you would like
Speaker:to see improvement made or have somebody
Speaker:work on solving problems there?
Speaker:I don't know. Fundamentally, instead of prescriptive, I'm
Speaker:very descriptive, in the sense that I don't imagine this
Speaker:is how things should be done for how people are making content. I just accept
Speaker:how people are making content and ask, how do you fix the problems that
Speaker:exist? I see a lot of debate on, like,
Speaker:audio versus video, and at some point I kind
Speaker:of get that there's this mix and merge of media, and
Speaker:I see that there's a lot of people with opinions, and maybe this is just
Speaker:me coming in with, you know, very little experience in this
Speaker:industry to date. But just let people make content, you know.
Speaker:If there's this mix between, like, show and podcast, I
Speaker:don't have strong opinions. Just let people do what they want to
Speaker:do. You know, call it a podcast if you want. Don't call it a podcast
Speaker:if you don't want. Use the platforms you want to. I just, you know,
Speaker:I see opinions from people who are more
Speaker:experienced than I am in this space, and, you know, I almost
Speaker:empathize, in that I have my own crotchety opinions in the space of
Speaker:programming and whatnot. But I don't get why people get so
Speaker:fussed about, you know, the direction content is going. It's all kind of
Speaker:mixing in this grab bag of, what does content even mean at this
Speaker:point? Yeah, if it's useful, if it's valuable, if somebody else enjoys it.
Speaker:Who cares what you call it? Just put it out there and let people access
Speaker:it. What about, is there any tech on
Speaker:your wish list, whether it's for content creation or for the,
Speaker:you know, editing process, something that either is out there that you want to get
Speaker:your hands on or something that has yet to be made that would be useful
Speaker:for you. Well, I mean, I'm kind of building the
Speaker:thing that I want, right? So I think
Speaker:the thing that frustrates me most about tools
Speaker:like Opus is that they don't
Speaker:have built-in quality control. Right? Like, you'll give it
Speaker:a video and it'll give you back 30 clips, and you have
Speaker:to still go through and curate which ones are obviously
Speaker:good and obviously bad. And I kind of wish that you could have some kind
Speaker:of quality control where at this point, AI is smart enough, it should be
Speaker:smart enough. And that's what I'm working on to make sure that,
Speaker:okay, not every show has 30 clips that are worth surfacing. So surface the
Speaker:ones that are actually worth surfacing, even if it's not 30, right? Like, if
Speaker:I only have eight moments that are worth sharing, give me those
Speaker:eight moments, but make sure that those eight moments are actually, you
Speaker:know, good or at least passable. Right? Like, I think we
Speaker:haven't even gotten past this basic filter. You know, there's
Speaker:artistic creativity, and we can all disagree on what constitutes good, but
Speaker:there's still a lot of cases where the results are just obviously bad. And
Speaker:it's like, let's focus on filtering those out first,
Speaker:then we can have an argument on what's good. All right? And
Speaker:then lastly, are there any podcasts or. I'm going to expand
Speaker:this. Are there other content creators that you are following
Speaker:religiously that you want to talk about?
Speaker:I have come to realize that I have a very different information
Speaker:diet from a lot of people. I was at a podcasting conference called
Speaker:Podfest earlier this year, and they mentioned four different shows and
Speaker:podcasts. For two of them, everyone raised their hand except me. And
Speaker:then for the third, I was the only one who raised my hand. Okay.
Speaker:Just some random stuff that I like. So one is,
Speaker:I follow a lot of AI stuff. So there's one podcaster called Dwarkesh
Speaker:Patel. He's, you know, some very smart CS
Speaker:guy who decided, I'm going to go into podcasting, and he interviews people like the CEO of
Speaker:Microsoft, and they're talking about the future of AI. And you listen to that
Speaker:stuff and it's just a very, very different view of the world,
Speaker:assuming that the whole world is going to be automated. They're talking about,
Speaker:are we going to have AI-only companies?
Speaker:And then the actual conversations from real-world people
Speaker:are very different. You know, AI is just a tool, or
Speaker:AI just means ChatGPT. I don't know, it's just very different information diets.
Speaker:And I'm sitting in the middle, I'm like, I don't know, like, just people have
Speaker:different information diets. That kind of feeds into their
Speaker:worldview, I guess, but just trying to make sense of that. But,
Speaker:you know, those are some of the things I like. I also just have like
Speaker:a hodgepodge of like, I like history. So I have some very random, like, history
Speaker:podcasts that I listen to, but it's all very nerdy, if that
Speaker:makes sense. That's okay. That's. I mean, I think that's part of what makes podcasts
Speaker:great, is that it allows people to really get as nerdy as they want to
Speaker:on a topic that interests them. And, you know, they're not just forced to consume
Speaker:what is available. So the nerdier the better.
Speaker:Yeah, exactly. You know, it's like some retired professor
Speaker:who has some time and has decided, you know, I'm going to do a podcast
Speaker:instead of doing lectures. Like, that's great and it's free and I love it.
Speaker:So thank you for making those podcasts, even if
Speaker:I'm one of only, like, 3,000 subscribers they have. I mean, 3,000
Speaker:is not a number to sneeze at, you know, but still, it's not like we're
Speaker:talking about Diary of a CEO kind of
Speaker:popularity. But there are people who listen to those things, and I'm one of those
Speaker:people. I'm sure
Speaker:the creators are happy to hear that. And we'll try to put links to all
Speaker:the ones that you did mention here in the show notes for anybody else who
Speaker:wants to check them out. Sam Bhattacharyya, the CEO
Speaker:of Katana Video. Thank you for joining us.
Speaker:Thank you for inviting me. Thanks for joining us today
Speaker:on Podcasting Tech. There are links to all the hardware and
Speaker:software that help power our guests' content and Podcasting
Speaker:Tech, available in the show notes and on our website at
Speaker:podcastingtech.com. You can also subscribe to the show on your
Speaker:favorite platform, connect with us on social media, and even leave a rating and review
Speaker:while you're there. Thanks and we'll see you next time on
Speaker:Podcasting Tech.