Data Engineer (Web Scraping & Data Ingestion)
About the Role
Do work that matters.
At AlertMedia, everything we do supports our mission: To save lives and minimize loss by identifying active threats globally and facilitating timely communications when an emergency threatens personal safety and business continuity.
Our core values drive us in our important mission of keeping people safe & informed:
- We’re humans not robots
- Customers always come first
- We work better together
- Simplicity is our strength
- Our reputation is priceless
- Hard work pays off
As one of the fastest-growing software companies in the nation, we’re focused on finding the best talent and building the best team to keep pace with our rapid growth and customer demand.
AlertMedia is looking for an experienced Data Engineer with a strong background in social media web scraping, data ingestion, and large-scale data pipelines. This role is a hands-on opportunity to lead the design, maintenance, optimization, and expansion of pipelines that ingest data from open-source data sources and social media platforms at scale.
You’ll play a critical role in how AlertMedia discovers, processes, and delivers real-time signals to customers. We’re looking for someone with demonstrated experience in production-grade scraping and data engineering, who is comfortable owning complex systems end to end and improving them over time.
We want you to shine by showing us real work you’re proud of - systems you’ve built, pipelines you’ve scaled, and hard problems you’ve solved. We don’t do trick questions or whiteboard exercises.
Who you are:
You are a pragmatic data engineer who thrives in a collaborative environment and enjoys working on systems that operate at scale in the real world. You have significant experience building and maintaining web scraping and data ingestion pipelines, and you understand the challenges of reliability, data quality, rate limits, schema drift, and adversarial environments.
You care deeply about:
- Building resilient, observable, and scalable data systems
- Writing clean, maintainable code
- Data correctness and pipeline reliability
- Collaborating across engineering, product, and intelligence teams
You are comfortable working primarily in Python, have experience with cloud infrastructure (AWS), and are eager to continuously improve both systems and processes.
What you get to do every day:
- Lead and own web scraping and data ingestion pipelines sourcing data from open-source datasets and social media platforms
- Design, build, and maintain scalable ETL and data processing pipelines (batch and/or streaming)
- Optimize existing scraping infrastructure for performance, reliability, and cost
- Handle challenges such as anti-bot protections, rate limiting, and schema drift
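The rate-limiting challenge above is commonly handled with exponential backoff and jitter. A minimal illustrative sketch in Python (the function names and the injected `fetch` callable are hypothetical, not part of AlertMedia's actual stack):

```python
import random
import time


def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential backoff with full jitter: the upper bound grows
    1s, 2s, 4s, ... per attempt, capped at `cap`, and the actual
    delay is drawn uniformly below it to avoid thundering-herd retries."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))


def fetch_with_retries(fetch, url, max_attempts=5, base=1.0):
    """Call `fetch(url)` (expected to return (status_code, body));
    on an HTTP 429 rate-limit response, sleep with exponential
    backoff and retry up to `max_attempts` times."""
    for attempt in range(max_attempts):
        status, body = fetch(url)
        if status == 429:  # rate limited: back off, then retry
            time.sleep(backoff_delay(attempt, base=base))
            continue
        if status == 200:
            return body
        raise RuntimeError(f"unexpected status {status} for {url}")
    raise RuntimeError(f"still rate-limited after {max_attempts} attempts: {url}")
```

Production scrapers layer more on top of this (per-domain token buckets, honoring `Retry-After` headers, proxy rotation), but the backoff-with-jitter core is the standard starting point.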