
Posted: 6 days ago | Platform: LinkedIn

Work Mode

Remote

Job Type

Full Time

Job Description

Svitla Systems Inc. is looking for a Senior Data Engineer for a full-time position (40 hours per week) in India. Our client is a cloud platform for business spend management (BSM) that helps organizations manage their spending, procurement, invoicing, expenses, and supplier relationships. The unified, cloud-based platform connects hundreds of organizations across the Americas, EMEA, and APAC with millions of suppliers globally, giving companies greater visibility into and control over how they spend money. Customers of all sizes have used the platform to bring billions of dollars in cumulative spending under management. Founded in 2006 and headquartered in San Mateo, California, the company aims to streamline and optimize business processes, driving efficiency and cost savings.

Requirements

  • Experience with processing large workloads and complex code on Spark clusters.
  • Experience setting up monitoring for Spark clusters and driving optimization based on insights and findings.
  • Understanding of designing and implementing scalable data warehouse solutions to support analytical and reporting needs.
  • Strong analytical skills for working with unstructured datasets.
  • Understanding of building processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Knowledge of Python and Jupyter Notebooks.
  • Knowledge of big data tools like Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools like Azkaban, Luigi, Airflow, etc.
  • Experience with AWS cloud services (EC2, EMR, RDS, and Redshift).
  • Willingness to work from an office at least 2 times per week.

Nice to have

  • Knowledge of stream-processing systems (Storm, Spark Streaming).

Responsibilities

  • Optimize Spark clusters for cost, efficiency, and performance by implementing robust monitoring systems to identify bottlenecks using data and metrics. Provide actionable recommendations for continuous improvement.
  • Optimize the infrastructure required for extracting, transforming, and loading data from various data sources using SQL and AWS ‘big data’ technologies.
  • Work with data and analytics experts to strive for greater cost efficiencies in the data systems.

We offer

  • US and EU projects based on advanced technologies.
  • Competitive compensation based on skills and experience.
  • Annual performance appraisals.
  • Remote-friendly culture and no micromanagement.
  • Personalized learning program tailored to your interests and skill development.
  • Bonuses for article writing, public talks, and other activities.
  • 15 PTO days and 10 national holidays.
  • Free webinars, meetups, and conferences organized by Svitla.
  • Fun corporate celebrations and activities.
  • Awesome team, friendly and supportive community!
