Svitla Systems Inc. is looking for an experienced L3 Developer Support Engineer for a full-time position (40 hours per week) in India. Our client is a trusted ERP provider for over 100 companies in the oil & gas industry, offering a unified cloud-based platform for upstream and midstream operations. Their integrated solution eliminates data silos, reduces operational costs, and accelerates business performance. The platform covers everything from accounting and data migration to field operations and transportation, all designed to streamline energy workflows. This role goes beyond traditional support: you will actively contribute to the codebase by writing and deploying hotfixes, working with APIs, databases, and AWS services, and collaborating closely with engineering and product teams. Strong technical skills and hands-on development experience are essential to resolve complex production issues in a fast-paced environment.

Please Note
- Night shift: willingness to work from 8:00 AM to 5:00 PM CST (6:30 PM – 3:30 AM IST) is required.
- You should be fully available during working hours; simultaneous full-time or on-site commitments may not be compatible.
- Background verification may apply.

Requirements
- 5+ years of experience working as a developer support engineer on the L3 tier.
- Strong proficiency in at least one programming language or framework (Ruby, Golang, Python, React, or JavaScript), with hands-on experience writing, debugging, and maintaining production-level code.
- Ability to actively develop and deploy hotfixes, patches, and performance improvements as part of support duties.
- Understanding of REST APIs, HTTP protocols, and JSON (see the sketch after this listing).
- Familiarity with AWS services.
- Experience with monitoring tools (e.g., Prometheus, Grafana, AWS CloudWatch, Datadog).
- Strong problem-solving and debugging skills (logs, APIs, DB queries).
- Experience with SQL and database troubleshooting.
- Experience working in Linux/Unix-based environments.
- Proven experience in conducting root cause analysis (RCA) and resolving production issues.
- Familiarity with frontend or backend technologies to better understand the codebase.
- Familiarity with support tools (e.g., Jira) and processes for issue tracking and maintaining Service Level Agreements (SLAs).
- Excellent communication skills to interact effectively with customers and internal teams.
- Ability to work independently and resolve production issues in high-pressure environments.

Responsibilities
- Provide Level 3 support for the company platform, including diagnosing, troubleshooting, and actively fixing production issues by writing and deploying code fixes, hotfixes, or patches.
- Diagnose, troubleshoot, and resolve production issues, ensuring swift resolution to minimize customer impact.
- Conduct root cause analysis (RCA) for recurring issues and implement permanent fixes, including code-level improvements.
- Maintain and troubleshoot MS SQL Server databases, ensuring data integrity, availability, and performance.
- Collaborate with Level 1 and 2 support teams to escalate and resolve issues efficiently.
- Document fixes, enhancements, and issue resolutions to facilitate knowledge sharing and future reference.
- Assist in releasing hotfixes or patches in coordination with the development team.
- Ensure compliance with Service Level Agreements (SLAs) for response times and issue resolution.
- Share feedback with product and engineering teams regarding product supportability and customer pain points.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Annual performance appraisals.
- Remote-friendly culture and no micromanagement.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- 15 PTO days, 10 national holidays.
- Free webinars, meetups, and conferences organized by Svitla.
- Fun corporate celebrations and activities.
- Awesome team, friendly and supportive community!
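To make the day-to-day concrete, below is a minimal, illustrative Python sketch of the kind of REST/JSON triage the posting above describes: reproducing a failing API call and capturing evidence for an RCA ticket. The URL, token, and endpoint are hypothetical placeholders, not details of the client's platform.

```python
# Hypothetical L3 triage helper: reproduce a reported API failure and
# log evidence for a root cause analysis (RCA) ticket.
import logging

import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("l3-triage")

API_URL = "https://api.example.com/v1/invoices/12345"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}          # placeholder credential

resp = requests.get(API_URL, headers=HEADERS, timeout=10)
log.info("status=%s elapsed_ms=%.0f", resp.status_code,
         resp.elapsed.total_seconds() * 1000)

if resp.status_code >= 500:
    # Server-side failure: capture the raw body for the backend team.
    log.error("server error, body=%s", resp.text[:500])
else:
    # Healthy or client-side issue: inspect the JSON payload shape.
    payload = resp.json()
    log.info("payload keys: %s", sorted(payload))
```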
Svitla Systems Inc. is looking for a Lead DevOps Engineer for a full-time position (40 hours per week) in India. Our client is a leading expert network that provides business and government professionals with opportunities to communicate with industry and subject-matter experts to answer research questions. Customers consult with these experts over the phone and in person at conferences, teleconferences, custom events, and workshops. They may also gather their primary research data through surveys, polls, or web-based offerings. Experts are categorized into six main industry sectors: healthcare; financial and business services; consumer goods and services; energy, industrials, and basic materials; technology, media, and telecom; and legal and regulatory. Since 2003, the company has provided its customers with primary research services, helping professionals comprehensively understand a topic before making significant investments and/or business decisions. Their multinational customer list includes nine of the top 10 consulting firms, hundreds of hedge funds, and many of the largest private equity firms and Fortune-ranked companies.

As Lead of DevOps & Cloud Engineering, you will be a core member of the engineering leadership team. In this role, you will manage and lead a team of system administrators and DevOps engineers. You will be responsible for developing and executing a Development Operations (DevOps) strategy to ensure quality software deployments and overall application health and performance.

Requirements
- 8-10+ years of experience working as a Senior DevOps Engineer.
- 3+ years of experience managing a team of DevOps/DevSecOps engineers.
- Hands-on experience with Linux (CentOS), PHP (configs/modules), Apache, Nginx, Solr, Elasticsearch clusters, Redis, Couchbase, SQL Server, and RabbitMQ.
- Well-versed in Git version control and Artifactory.
- Problem-solving skills and a proactive work style.
- Strong interpersonal communication skills and proven practical leadership skills.
- An independent contributor with drive; collaborative and teamwork-oriented.
- The ability to take ownership of tasks, manage projects, and uphold quality on a deadline.
- A strategic thinker who exhibits judgment in prioritization and overall team management.

Nice to have
- Solid understanding and experience of IaaS and PaaS cloud services (Azure preferred).
- Knowledge of Datadog and other infrastructure and application monitoring and observability tools.
- Knowledge of containerization technology (Docker, Kubernetes, ASR).
- Solid understanding and hands-on experience with CI/CD tools:
  – Terraform, Ansible;
  – Azure DevOps pipelines;
  – Argo CD, KEDA, and Velero.
- Knowledge of Azure IaaS/PaaS:
  – Compute: VMs, Scale Sets;
  – Networking: VNet, Application Gateway, load balancers, NSG, ASG, firewalls;
  – Storage: Storage Accounts, Virtual Disks, Blob Storage;
  – PaaS: WebApps, WebJobs, Service Bus, Event Grid, etc.

Responsibilities
- Take complete ownership of DevOps/DevSecOps pipelines, supporting enterprise production, development, and test deployments.
- Manage CI/CD pipelines using Azure DevOps, Argo CD, KEDA, and Velero for backups and disaster recovery.
- Lead and manage a team of experienced DevOps engineers in the global team.
- Provision, install, configure, optimize, maintain, and support all components of the web application stack, including Linux, Apache, SQL, PHP, Solr, Elasticsearch, Redis, and Couchbase.
- Right-size and optimize infrastructure to balance performance and cost for AKS and Azure VMs.
- Secure all infrastructure by applying patches and updates on a regular basis.
- Configure and manage systems to monitor and alert on infrastructure and application issues.
- Provide production support by triaging and troubleshooting.
- Contribute to overall engineering initiatives as a member of the engineering leadership team.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Annual performance appraisals.
- Remote-friendly culture and no micromanagement.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- 15 PTO days, 10 national holidays.
- Free webinars, meetups, and conferences organized by Svitla.
- Fun corporate celebrations and activities.
- Awesome team, friendly and supportive community!
Svitla Systems Inc. is looking for a Senior Data Engineer for a full-time position (40 hours per week) in India. Our client is a cloud platform for business spend management (BSM) that helps organizations manage their spending, procurement, invoicing, expenses, and supplier relationships. They provide a unified, cloud-based spend management platform that connects hundreds of organizations across the Americas, EMEA, and APAC with millions of suppliers globally. The platform provides greater visibility into and control over how companies spend money; small, medium, and large customers have used it to bring billions of dollars in cumulative spending under management. Founded in 2006 and headquartered in San Mateo, California, the company aims to streamline and optimize business processes, driving efficiency and cost savings.

Requirements
- Experience processing large workloads and complex code on Spark clusters.
- Experience setting up monitoring for Spark clusters and driving optimization based on insights and findings.
- Understanding of designing and implementing scalable data warehouse solutions to support analytical and reporting needs.
- Strong analytical skills related to working with unstructured datasets.
- Understanding of building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Knowledge of Python and Jupyter Notebooks.
- Knowledge of big data tools like Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools like Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services (EC2, EMR, RDS, and Redshift).
- Willingness to work from an office at least 2 times per week.

Nice to have
- Knowledge of stream-processing systems (Storm, Spark Streaming).

Responsibilities
- Optimize Spark clusters for cost, efficiency, and performance by implementing robust monitoring systems to identify bottlenecks using data and metrics; provide actionable recommendations for continuous improvement (see the sketch after this listing).
- Optimize the infrastructure required for extracting, transforming, and loading data from various data sources using SQL and AWS ‘big data’ technologies.
- Work with data and analytics experts to strive for greater cost efficiencies in the data systems.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Annual performance appraisals.
- Remote-friendly culture and no micromanagement.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- 15 PTO days, 10 national holidays.
- Free webinars, meetups, and conferences organized by Svitla.
- Fun corporate celebrations and activities.
- Awesome team, friendly and supportive community!
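As a rough, non-authoritative illustration of the Spark tuning work this posting names, here is a minimal PySpark sketch assuming hypothetical S3 paths and columns; it shows two routine levers (a broadcast join and the shuffle-partition setting) and uses explain() to inspect the physical plan when hunting bottlenecks.

```python
# Minimal PySpark sketch of common optimization levers; paths, columns,
# and the app name are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("spend-etl-sketch")                    # placeholder name
    .config("spark.sql.shuffle.partitions", "200")  # tune for cluster size
    .getOrCreate()
)

orders = spark.read.parquet("s3://example-bucket/orders/")        # placeholder
suppliers = spark.read.parquet("s3://example-bucket/suppliers/")  # placeholder

# Broadcasting the small dimension table avoids a shuffle-heavy join.
joined = orders.join(F.broadcast(suppliers), "supplier_id")

# explain() prints the physical plan, a first stop when hunting bottlenecks.
joined.explain()

(joined.groupBy("supplier_id")
       .agg(F.sum("amount").alias("total_spend"))
       .write.mode("overwrite")
       .parquet("s3://example-bucket/spend-by-supplier/"))
```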
Svitla Systems Inc. is looking for a Senior Software Engineer with a Data Background for a full-time position (40 hours per week) in India. Our client is a cloud-based Business Spend Management (BSM) platform that helps organizations manage procurement, invoicing, expenses, and supplier relationships. Their unified solution provides visibility and control over how companies spend money, connecting hundreds of organizations across the Americas, EMEA, and APAC with millions of suppliers worldwide. Founded in 2006 and headquartered in San Mateo, California, the company leverages AI trained on trillions of dollars in spend data across a global network of over 10 million buyers and suppliers. This empowers businesses to predict, optimize, and automate smarter decisions that drive operational efficiency and improve margins.

Role Overview
You will join the development team (IST working hours; the team and TL are located in India) that builds and maintains one of the company’s software products. This product is a multi-tenant SaaS solution running on major cloud platforms like AWS, Azure, and GCP. We expect you to be a strong leader with extensive technical experience and a solid analytical approach to solving problems. You focus on finding effective solutions and take ownership and responsibility seriously.

Requirements
- Bachelor’s degree in computer science, information systems, computer engineering, systems analysis, or a related discipline, or equivalent work experience.
- 4 to 8 years of experience building enterprise SaaS web applications.
- Strong knowledge of Python and Django.
- Experience with relational SQL.
- Understanding of microservices and event-driven architecture.
- Strong knowledge of APIs and integration with the backend.
- Experience in cloud (AWS is preferred).
- Experience with CI/CD tooling and software delivery and bundling mechanisms.
- Experience in full-stack web development, with hands-on experience building responsive UIs, Single Page Applications, and reusable components, and a keen eye for UI design and usability.

Nice to have
- Familiarity with AI/ML-based data cleansing, deduplication, and entity resolution techniques.
- Experience with Kafka or other pub-sub mechanisms.
- Experience with Redis or other caching mechanisms.
- Previous experience in additional programming languages and in one or more of the following modern frameworks/technologies: Ruby, Java, .NET, C, etc.
- Expertise in performance optimization and monitoring tools.

Responsibilities
- Implement a cloud-native analytics platform with high performance and scalability.
- Build an API-first infrastructure for data in and data out.
- Build data ingestion capabilities for client data, as well as external spend data.
- Leverage data classification AI algorithms to cleanse and harmonize data.
- Own data modelling, microservice orchestration, and monitoring & alerting.
- Build solid expertise in the entire client application suite and leverage this knowledge to better design application and data frameworks.
- Adhere to iterative development processes to deliver concrete value each release while driving longer-term technical vision.
- Engage with cross-organizational teams such as Product Management, Integrations, Services, Support, and Operations to ensure the success of overall software development, implementation, and deployment.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Annual performance appraisals.
- Remote-friendly culture and no micromanagement.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- 15 PTO days, 10 national holidays.
- Free webinars, meetups, and conferences organized by Svitla.
- Fun corporate celebrations and activities.
- Awesome team, friendly and supportive community!
Svitla Systems Inc. is looking for a Lead Software Engineer (Python and Data Background) for a full-time position (40 hours per week) in India. Our client is a cloud platform for business spend management (BSM) that helps organizations manage their spending, procurement, invoicing, expenses, and supplier relationships. They provide a unified, cloud-based spend management platform that connects hundreds of organizations across the Americas, EMEA, and APAC with millions of suppliers globally. The platform provides greater visibility into and control over how companies spend money; small, medium, and large customers have used it to bring billions of dollars in cumulative spending under management. The customer multiplies margins through its community-generated AI and industry-leading total spend management platform for businesses large and small, informed by trillions of dollars of direct and indirect spend data across a global network of 10M+ buyers and suppliers. Founded in 2006 and headquartered in San Mateo, California, the company aims to streamline and optimize business processes, driving efficiency and cost savings.

As a member of the development group, you will become part of a team that develops and maintains one of the client’s software products, built as a multi-tenant SaaS solution on major cloud platforms such as AWS, Azure, and GCP. We expect that you are a strong leader with extensive technical experience. You have a well-founded analytical approach to finding good solutions, a strong sense of responsibility, and excellent skills in communication and planning. You are proactive in your approach and a strong team player.

Requirements
- Bachelor’s degree in computer science, information systems, computer engineering, systems analysis, or a related discipline, or equivalent work experience.
- 7-10 years of experience building enterprise SaaS web applications.
- Strong knowledge of Python and Django.
- Experience with relational SQL.
- Understanding of microservices and event-driven architecture.
- Strong knowledge of APIs and integration with the backend.
- Cloud expertise (AWS is preferred), including expertise in performance optimization and monitoring tools.
- Experience with CI/CD tooling and software delivery and bundling mechanisms.
- Previous experience in additional programming languages and in one or more of the following modern frameworks/technologies: Ruby, Java, .NET, C, etc.
- Experience in full-stack web development, with hands-on experience building responsive UIs, Single Page Applications, and reusable components, and a keen eye for UI design and usability.

Nice to have
- Familiarity with AI/ML-based data cleansing, deduplication, and entity resolution techniques.
- Experience with Kafka or other pub-sub mechanisms.
- Experience with Redis or other caching mechanisms.
- Experience with relational and NoSQL databases such as MySQL, PostgreSQL, AWS Aurora, and Cassandra.
Responsibilities
- Implement a cloud-native analytics platform with high performance and scalability.
- Build an API-first infrastructure for data in and data out.
- Build data ingestion capabilities for the client’s data, as well as external spend data.
- Leverage data classification AI algorithms to cleanse and harmonize data.
- Own data modelling, microservice orchestration, and monitoring & alerting.
- Build solid expertise in the client’s entire application suite and leverage this knowledge to better design application and data frameworks.
- Adhere to iterative development processes to deliver concrete value each release while driving longer-term technical vision.
- Engage with cross-organizational teams such as Product Management, Integrations, Services, Support, and Operations to ensure the success of overall software development, implementation, and deployment.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Annual performance appraisals.
- Remote-friendly culture and no micromanagement.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- 15 PTO days, 10 national holidays.
- Free webinars, meetups, and conferences organized by Svitla.
- Fun corporate celebrations and activities.
- Awesome team, friendly and supportive community!
Svitla Systems Inc. is looking for a Software Engineer with a Data Background for a full-time position (40 hours per week) in India. Our client is a cloud platform for business spend management (BSM) that helps organizations manage their spending, procurement, invoicing, expenses, and supplier relationships. They provide a unified, cloud-based spend management platform that connects hundreds of organizations across the Americas, EMEA, and APAC with millions of suppliers globally. The platform provides greater visibility into and control over how companies spend money; small, medium, and large customers have used it to bring billions of dollars in cumulative spending under management. Founded in 2006 and headquartered in San Mateo, California, the company aims to streamline and optimize business processes, driving efficiency and cost savings.

Requirements
- Bachelor’s degree in computer science, information systems, computer engineering, systems analysis, or a related discipline, or equivalent work experience.
- 3 to 5 years of experience building enterprise SaaS web applications.
- Strong knowledge of Python and Django.
- Experience with relational SQL.
- Understanding of microservices and event-driven architecture.
- Strong knowledge of APIs and integration with the backend.
- Experience with CI/CD tooling and software delivery and bundling mechanisms.

Nice to have
- Familiarity with AI/ML-based data cleansing, deduplication, and entity resolution techniques.
- Experience with Kafka or other pub-sub mechanisms.
- Experience with Redis or other caching mechanisms.
- Previous experience in additional programming languages and in one or more of the following modern frameworks/technologies: Ruby, Java, .NET, C, etc.
- Experience in full-stack web development, with hands-on experience building responsive UIs, Single Page Applications, and reusable components, and a keen eye for UI design and usability.
- Experience in cloud (AWS is preferred).
- Expertise in performance optimization and monitoring tools.

Responsibilities
- Implement a cloud-native analytics platform with high performance and scalability.
- Build an API-first infrastructure for data in and data out.
- Build data ingestion capabilities for the client’s data, as well as external spend data.
- Leverage data classification AI algorithms to cleanse and harmonize data.
- Own data modelling, microservice orchestration, and monitoring & alerting.
- Build solid expertise in the client’s entire application suite and leverage this knowledge to better design application and data frameworks.
- Adhere to iterative development processes to deliver concrete value each release while driving longer-term technical vision.
- Engage with cross-organizational teams such as Product Management, Integrations, Services, Support, and Operations to ensure the success of overall software development, implementation, and deployment.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Annual performance appraisals.
- Remote-friendly culture and no micromanagement.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- 15 PTO days, 10 national holidays.
- Free webinars, meetups, and conferences organized by Svitla.
- Fun corporate celebrations and activities.
- Awesome team, friendly and supportive community!
Company Description

Svitla Systems, Inc. is a global digital solutions company with over 20 years of experience, crafting more than 5,000 transformative solutions for clients worldwide. Our mission is to leverage digital, cloud, data, and intelligent technologies to create sustainable solutions for our clients, enhancing their growth and competitive edge. With a diverse team of over 1,000 technology professionals, Svitla serves a range of clients from innovative startups to Fortune 500 companies across 20+ industries. Svitla operates from 10 delivery centers globally, specializing in areas like cloud migration, data analytics, web and mobile development, and more. We are proud to be a WBENC-certified business and one of the largest, fastest-growing women-owned IT companies in the US.

Role Description

This is a fully remote, full-time, long-term contractual position with one of our clients, who is building the next generation of secure, real-time proctoring solutions for high-stakes exams. We’re looking for a Senior ML/AI Engineer to architect, implement, and maintain Azure-based AI models that power speech-to-text, computer vision, identity verification, and intelligent chat features during exam sessions.

Responsibilities
- Implement real-time speech-to-text transcription and audio-quality analysis using Azure AI Speech.
- Build prohibited-item detection, OCR, and face-analysis pipelines with Azure AI Vision.
- Integrate Azure Bot Service for rule-based, intelligent chat support.
- Collaborate with our DevOps Engineer on CI/CD and infrastructure-as-code for AI model deployment.
- Train, evaluate, and deploy object-detection models (e.g., screen-reflection, background faces, ID checks) using Azure Custom Vision.
- Develop and maintain skeletal-tracking models (OpenPose/MediaPipe) for gaze-anomaly detection.
- Fine-tune Azure Face API for ID-to-headshot matching at session start and continuous identity validation.
- Expose inference results via REST APIs in partnership with backend developers to drive real-time proctor dashboards and post-session reports.
- Monitor model performance, manage versioning/retraining workflows, and optimize accuracy for edge-case scenarios.

Qualifications
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field.
- 5+ years of professional ML/AI experience, with at least 2 years working on production-grade Azure Cognitive Services.
- Strong Python skills plus 3+ years with TensorFlow or PyTorch.
- Hands-on experience (1–2 years) with:
  - Azure AI Speech (speech-to-text, audio analysis)
  - Azure AI Vision (object detection, OCR, face analysis)
  - Azure Custom Vision model training and deployment
  - Azure Face API fine-tuning and biometric matching
  - Azure Bot Service integration
- Solid understanding of CI/CD practices and tools (Azure DevOps, Docker, Kubernetes), with 2+ years of collaboration on AI model deployments.
- 2+ years building and consuming RESTful or gRPC APIs for AI inference.
- Proven track record of monitoring and optimizing model performance in production.

Good to have
- 1+ year with skeletal-tracking frameworks (OpenPose, MediaPipe).
- Familiarity with Azure ML Studio, ML pipelines, or MLflow for model versioning and retraining.
- Experience with edge-deployment frameworks (TensorFlow Lite, ONNX Runtime).
- Background in security and compliance for biometric data (GDPR, PCI-DSS).
- Azure AI Engineer Associate or Azure Data Scientist Associate certification.
Additional Information
- The role is a fully remote, full-time, long-term contractual position.
- The hiring process includes an initial screening by the recruitment team, an HR motivation interview, an internal tech screening, a client technical interview, and finally a client management interview.
- The salary range for this position is 50-70 LPA (INR).
- The position needs to be filled urgently; only candidates with an official notice period (or remaining notice period) of <=30 days will be screened.
Svitla Systems Inc. is looking for a Senior ML/AI Engineer for a full-time position (40 hours per week) in India. Our client is a leading online training provider. The company offers 7,000+ regulated and non-regulated courses for career development and has 11+ million learners in 67 countries. The robust e-learning platform supports multimedia content, quizzes, and assessments. Courses are accessible on various devices, including smartphones and tablets, and course content is updated regularly to reflect the latest industry standards and regulations. It is a reliable and reputable company that provides various online training and certification programs across multiple industries, such as real estate, food and beverage, environmental health and safety, and more. Established in 1997 and headquartered in Austin, Texas, USA, they have earned accreditation from esteemed organizations and regulatory bodies such as the International Accreditors for Continuing Education and Training (IACET), the ANSI National Accreditation Board (ANAB), and the Texas Real Estate Commission (TREC), among others.

Project Overview
- Domain: AI-powered online proctoring for remote examinations.
- Goal: develop a high-availability, scalable platform to replace an outdated legacy system.
- Team: cross-functional Agile team including an AI/ML Engineer, Solution Architect, Backend Developer, and DevOps Engineer.
- Collaboration: work closely with technical stakeholders to integrate AI capabilities into a secure and compliant platform.
- Key capabilities: identity verification, gaze tracking, environment validation, speech analysis, and behavioral anomaly detection; design, implement, and optimize AI services and models.

Technology Stack
- Programming language: Python.
- ML frameworks: PyTorch, TensorFlow, Scikit-learn.
- Cloud platform: Microsoft Azure.
- Azure services: Azure AI Speech, Azure Vision, Azure Custom Vision, Azure Machine Learning Studio, Azure Face API, Azure Bot Service, Azure Cognitive Services.

Requirements
- 4+ years of experience in ML/AI engineering, preferably in computer vision, speech processing, or biometric systems.
- Practical experience with Azure (preferably) or other cloud solutions for AI Speech, Vision, and Face APIs.
- Proficiency in Python and popular ML frameworks (e.g., PyTorch, TensorFlow, Scikit-learn, Anomalib).
- Familiarity with video processing models (Video Swin Transformer, X3D), facial landmark detection frameworks (MediaPipe, MMPose), and speech technologies (STT, speaker diarization).
- Experience with the Azure cloud is a huge plus, but AWS will also work.
- Understanding of the risks and ethics involved in facial recognition and behavioral tracking.
- Strong problem-solving and debugging skills, with a proactive mindset.
- Advanced English (written and verbal) for daily communication.

Working hours: 8 hours per day, flexible schedule.

Responsibilities
- Use Azure AI Speech for real-time speech-to-text transcription and audio quality analysis during exam sessions.
- Apply Azure AI Vision to enable prohibited item detection, OCR, and face analysis.
- Support Azure Bot Service integration for intelligent, rule-based chat communication.
- Work with the DevOps Engineer to streamline AI model deployment and infrastructure setup.
- Train, evaluate, and deploy object detection models using Azure Custom Vision (e.g., screen reflection, background faces, ID inspection).
- Maintain skeletal tracking models for analyzing head and eye movement, enabling gaze anomaly detection and timeline flagging.
- Fine-tune the Face API to perform ID-to-headshot matching at session start and enable continuous identity validation throughout the exam (see the sketch after this listing).
- Collaborate with backend developers to deliver AI inference results via APIs, powering real-time proctor dashboards and session reports.
- Monitor model performance, manage versioning and retraining workflows, and optimize for accuracy and reliability in edge cases.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Annual performance appraisals.
- Remote-friendly culture and no micromanagement.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- 15 PTO days, 10 national holidays.
- Free webinars, meetups, and conferences organized by Svitla.
- Fun corporate celebrations and activities.
- Awesome team, friendly and supportive community!
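For orientation, here is a minimal sketch of the ID-to-headshot matching step referenced above, using the older azure-cognitiveservices-vision-face Python SDK. The endpoint, key, and image paths are hypothetical placeholders, and this is an illustrative sketch rather than the client's actual pipeline.

```python
# Hypothetical ID-to-headshot verification with the legacy Azure Face SDK
# (pip install azure-cognitiveservices-vision-face).
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-face-api-key>"                                        # placeholder

client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))


def detect_face_id(image_path: str) -> str:
    """Detect the first face in an image and return its transient face ID."""
    with open(image_path, "rb") as image:
        faces = client.face.detect_with_stream(image)
    if not faces:
        raise ValueError(f"no face found in {image_path}")
    return faces[0].face_id


# Compare the photo on the ID document with a live webcam frame.
id_face = detect_face_id("id_document.jpg")         # placeholder path
headshot_face = detect_face_id("webcam_frame.jpg")  # placeholder path

result = client.face.verify_face_to_face(id_face, headshot_face)
print(f"identical={result.is_identical} confidence={result.confidence:.2f}")
```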
Svitla Systems Inc. is looking for a Data Engineer (with ML/AI experience) for a full-time position (40 hours per week) in India. Our client is the world’s largest travel guidance platform, helping hundreds of millions of people each month become better travelers, from planning to booking to taking a trip. Travelers across the globe use the site and app to discover where to stay, what to do, and where to eat, based on guidance from those who have been there before. With more than 1 billion reviews and opinions covering nearly 8 million businesses, travelers turn to the platform to find deals on accommodations, book experiences, and reserve tables at delicious restaurants. As a travel guide company available in 43 markets and 22 languages, it also helps travelers discover great places nearby.

As a member of the Data Platform Enterprise Services Team, you will collaborate with engineering and business stakeholders to build, optimize, maintain, and secure the full data vertical, including tracking instrumentation, information architecture, ETL pipelines, and tooling that provide key analytics insights for business-critical decisions at the highest levels of product, finance, sales, CRM, marketing, data science, and more. You will work in a dynamic environment with a continuously modernizing tech stack, including highly scalable architecture, cloud-based infrastructure, and real-time responsiveness.

Requirements
- BS/MS in Computer Science or a related field.
- 4+ years of experience in data engineering or software development.
- Experience with AI models and LLMs.
- Proven data design and modeling with large datasets (star/snowflake schema, SCDs, etc.).
- Strong SQL skills and the ability to query large datasets.
- Experience with modern cloud data warehouses: Snowflake, BigQuery, etc.
- ETL development experience: SLAs, performance, and monitoring.
- Familiarity with BI tools and semantic layer principles (e.g., Looker, Tableau).
- Understanding of CI/CD, testing, and documentation practices.
- Comfortable in a fast-paced, dynamic environment.
- Ability to collaborate cross-functionally and communicate with technical and non-technical peers.
- Strong data investigation and problem-solving abilities.
- Comfortable with ambiguity and focused on clean, maintainable data architecture.
- Detail-oriented with a strong sense of ownership.

Nice to have
- Experience with data governance and data transformation tools.
- Prior work with e-commerce platforms.
- Experience with Airflow, Dagster, Monte Carlo, or knowledge graphs.

Responsibilities
- Collaborate with stakeholders from multiple teams to collect business requirements and translate them into technical data model solutions.
- Design, build, and maintain efficient, scalable, and reusable data models in cloud data warehouses (e.g., Snowflake, BigQuery).
- Transform data from many sources into clean, curated, standardized, and trustworthy data products.
- Build data pipelines and ETL processes handling terabytes of data (see the sketch after this listing).
- Analyze data using SQL and dashboards; ensure models align with business needs.
- Ensure data quality through testing, observability tools, and monitoring.
- Troubleshoot complex data issues, validate assumptions, and trace anomalies.
- Participate in code reviews and help improve data development standards.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Annual performance appraisals.
- Remote-friendly culture and no micromanagement.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- 15 PTO days, 10 national holidays.
- Free webinars, meetups, and conferences organized by Svitla.
- Fun corporate celebrations and activities.
- Awesome team, friendly and supportive community!
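Since the posting names Airflow among the workflow tools, here is a minimal, hypothetical Airflow 2.x DAG sketch of an extract-transform-load pipeline; the task bodies, DAG id, and schedule are illustrative stand-ins rather than anything from the client's stack.

```python
# Hypothetical three-step ETL DAG for Airflow 2.x; the callables only log,
# standing in for real extract/transform/load logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    print("pulling raw events from the source system")  # placeholder


def transform() -> None:
    print("cleaning and conforming records")            # placeholder


def load() -> None:
    print("writing curated rows to the warehouse")      # placeholder


with DAG(
    dag_id="example_etl",              # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```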
Svitla Systems Inc. is looking for a Senior Dynamics 365 Integrator for a full-time position (40 hours per week) in India. Our client is a leading national managed security and IT solution provider that helps organizations secure and support their businesses today, solve for tomorrow, and strategize for the future with cyber-first solutions. Their experts, including specialists in cybersecurity, engineering, networking, and cloud computing, collaborate with customers to implement solutions that protect their assets, reduce risk, and optimize performance end-to-end. Since 1999, they have delivered information technology consulting and outsourcing services to small- and medium-sized businesses. The company’s managed IT services, IT strategy, and cloud services enable corporations, healthcare organizations, non-profits, and public sector entities to rapidly, efficiently, and cost-effectively achieve their business objectives and realize their potential.

The project involves migrating from on-premises Dynamics 365 to model-driven Power Apps. It is a 3-month temporary contract (with the possibility of extension) to help the client catch up on the project.

Requirements
- Proven experience in implementing and managing Dynamics 365 solutions.
- Strong understanding of model-driven Power Apps and customization techniques.
- Experience migrating on-premises Dynamics 365 to Azure’s Power Platform.
- Experience with Power Platform.
- Knowledge of SQL.
- Knowledge of C# and JavaScript.
- Familiarity with Azure.
- Familiarity with business analysis.
- Understanding of system integration and data management.
- Experience in project management and delivery.
- Outstanding problem-solving and analytical skills.
- Exceptional skills in communication and stakeholder management.

Responsibilities
- Implement, configure, and customize Dynamics 365 applications to meet business requirements.
- Develop model-driven Power Apps to enhance business processes and improve user experience.
- Ensure seamless integration of Dynamics 365 with other enterprise systems.
- Develop and enforce best practices for system usage and data management.
- Conduct training sessions for end users and create documentation.
- Execute enhancements to improve functionality.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Flexibility in workspace, either remote or in one of our development offices.
- Bonuses for recommendations of new employees.
- Bonuses for article writing, public talks, and other activities.
- Free webinars, meetups, and conferences organized by Svitla.
- Corporate celebrations and activities, regular lectures on various topics.
- Awesome team, friendly and supportive community!
Job Title: Sr Data/AI/ML Engineer
Location: Remote/Pune
Skill set: Azure, Data Engineering, Python

Your day-to-day responsibilities include:
- Design and develop systems for maintaining Azure Databricks, ETL processes, business intelligence, and data ingestion pipelines for AI/ML use cases.
- Build, scale, and optimize GenAI and ML workloads across Databricks and other production environments, with strong attention to cost efficiency, compliance, and robustness.
- Build ML pipelines to train, serve, and monitor reinforcement learning or supervised learning models using Databricks and MLflow.
- Create and support ETL pipelines and table schemas to facilitate the accommodation of new and existing data sources for the Lakehouse on Databricks.
- Maintain data governance and data privacy standards.
- Collaborate with data architects, data scientists, analysts, and other business consumers to quickly and thoroughly analyze business requirements to populate the data warehouse, optimized for reporting and analytics.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Maintain technical documentation and mentor junior data engineers on best practices in data engineering and Lakehouse architecture.
- Drive innovation and contribute to the development of cutting-edge Generative AI and analytical capabilities for the client's Next-Gen research enablement platform.

Minimum Qualifications:
- 7+ years of related experience with a bachelor’s degree.
- Proven experience designing and deploying applications using Generative AI and large language models (e.g., GPT-4, Claude, open-weight LLMs).
- Experience with retrieval-augmented generation, embeddings-based search, agent orchestration, or prompt chaining.
- Familiarity with modern LLM/GenAI tools such as LangChain, LlamaIndex, HuggingFace Transformers, Semantic Kernel, or LangGraph.
- Advanced working SQL knowledge and experience with relational and NoSQL databases, query authoring (SQL), and working familiarity with a variety of databases (e.g., SQL Server).
- Experience building and optimizing data pipelines on Azure Databricks.
- In-depth knowledge and hands-on experience with data engineering, machine learning, data warehousing, and Delta Lake on Databricks.
- Highly proficient in Spark, Python, and SQL.
- Working knowledge of Fivetran is a bonus.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Exceptional stakeholder management and communication skills to communicate effectively across global teams.

Preferred Qualifications:
- Knowledge of BI tools like Power BI, etc.
- Experience building and deploying ML and feature engineering pipelines to production using MLflow.
- Experience building data pipelines from various business applications like Salesforce, NetSuite, etc.
- Knowledge of message queuing, stream processing, and highly scalable data stores.
- Experience or knowledge of working in a compliance-based environment, including building and deploying compliant software solutions throughout the software life cycle, is nice to have.
- Familiarity with cloud-based AI/ML services and Generative AI tools.
As an Odoo ERP Developer at Svitla Systems Inc., you will be responsible for contributing to the development and maintenance of Odoo modules to meet client requirements. Your role will involve implementing new features, writing clean and reusable code, debugging technical issues, and optimizing the performance of existing systems. You will collaborate with cross-functional teams, participate in code reviews, and stay updated with the latest Odoo releases and development trends.

**Key Responsibilities:**
- Implement new features based on tickets and business requirements.
- Develop, customize, and maintain Odoo modules to meet client needs.
- Write clean, reusable, and well-documented code.
- Debug and resolve technical issues in existing applications.
- Collaborate with project managers, designers, and QA teams.
- Optimize the performance of existing Odoo systems.
- Participate in code reviews and knowledge-sharing sessions.

**Qualifications Required:**
- 3+ years of experience developing Odoo ERP (both frontend and backend).
- Knowledge of JavaScript, Python, HTML, CSS, SCSS, Bootstrap, and XML.
- Good understanding of the Odoo framework, modules, and customization.
- Knowledge of software development best practices and version control systems (e.g., Git).
- Strong problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment and communicate effectively.
Requirements
- 7+ years of related experience with a bachelor’s degree.
- Proven experience designing and deploying applications using Generative AI and large language models (e.g., GPT-4, Claude, open-weight LLMs).
- Understanding of retrieval-augmented generation, embeddings-based search, agent orchestration, or prompt chaining.
- Familiarity with modern LLM/GenAI tools such as LangChain, LlamaIndex, HuggingFace Transformers, Semantic Kernel, or LangGraph.
- Advanced knowledge of SQL and experience working with relational and NoSQL databases, query authoring (SQL), as well as working familiarity with a variety of databases (e.g., SQL Server).
- Experience building and optimizing data pipelines on Azure Databricks.
- In-depth knowledge of data engineering, machine learning, data warehousing, and Delta Lake on Databricks.
- Strong knowledge of Spark and Python.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Excellent stakeholder management and communication skills, enabling effective communication across global teams.

Nice to have
- Familiarity with Fivetran.
- Familiarity with BI tools like Power BI, etc.
- Understanding of building and deploying ML and feature engineering pipelines to production using MLflow.
- Experience building data pipelines from various business applications like Salesforce, NetSuite, etc.
- Knowledge of message queuing, stream processing, and highly scalable data stores.
- Experience working in a compliance-based environment, including building and deploying compliant software solutions throughout the software life cycle.
- Familiarity with cloud-based AI/ML services and Generative AI tools.

Responsibilities
- Design and develop systems for maintaining Azure Databricks, ETL processes, business intelligence, and data ingestion pipelines for AI/ML use cases.
- Build, scale, and optimize GenAI and ML workloads across Databricks and other production environments, with strong attention to cost efficiency, compliance, and robustness.
- Build ML pipelines to train, serve, and monitor reinforcement learning or supervised learning models using Databricks and MLflow (see the sketch after this listing).
- Create and support ETL pipelines and table schemas to facilitate the integration of new and existing data sources into the Lakehouse on Databricks.
- Maintain data governance and data privacy standards.
- Collaborate with data architects, data scientists, analysts, and other business consumers to quickly and thoroughly analyze business requirements to populate the data warehouse, optimized for reporting and analytics.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Maintain technical documentation and mentor junior data engineers on best practices in data engineering and Lakehouse architecture.
- Drive innovation and contribute to the development of cutting-edge Generative AI and analytical capabilities for the Next-Gen research enablement platform.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Regular performance appraisals to support your growth.
- 15 vacation days, 10 national holidays, 5 sick days.
- Free tech webinars and meetups organized by Svitla.
- Reimbursement for private medical insurance.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- Fun corporate online/offline celebrations and activities.
- Awesome team, friendly and supportive community!
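To ground the MLflow responsibility flagged above, here is a minimal, generic sketch of training a model and logging it with MLflow; the experiment path, model choice, and synthetic data are illustrative assumptions, not the client's workload.

```python
# Hypothetical MLflow tracking run on a toy supervised-learning model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/demo-experiment")  # placeholder Databricks path

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Record the run so serving and monitoring can reference this version.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```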
Svitla Systems Inc. is looking for a Full Stack Software Engineer (Ruby on Rails + React). Our client is a cloud platform for business spend management (BSM) that helps organizations manage their spending, procurement, invoicing, expenses, and supplier relationships. They provide a unified, cloud-based spend management platform that connects hundreds of organizations across the Americas, EMEA, and APAC with millions of suppliers globally. The platform provides greater visibility into and control over how companies spend money; small, medium, and large customers have used it to bring billions of dollars in cumulative spending under management. Founded in 2006 and headquartered in San Mateo, California, the company aims to streamline and optimize business processes, driving efficiency and cost savings.

Requirements
- 3-6 years of experience building enterprise SaaS web applications using Ruby.
- Strong experience in full-stack web development with JavaScript, React, CSS/Less, Sphinx, Redis, Haml, and RSpec.
- Solid knowledge of Ruby on Rails.
- Contributions of fixes or features to Rails Core or jQuery.
- Understanding of writing and/or maintaining a Rails plugin and/or RubyGems.
- Experience with relational SQL and NoSQL databases, such as MySQL, PostgreSQL, AWS Aurora, and Cassandra.
- Experience with database performance issues and query optimization in MySQL.
- Proven expertise in performance optimization and monitoring tools.
- Strong knowledge of cloud platforms (e.g., AWS, Azure, or GCP).
- Experience with CI/CD tooling and software delivery and bundling mechanisms.
- Bachelor’s degree in computer science, information systems, computer engineering, systems analysis, or a related discipline, or equivalent work experience.
- A strong leader with extensive technical experience.
- A well-founded analytical approach to finding good solutions, a strong sense of responsibility, and excellent skills in communication and planning.
- Proactive in your approach and a strong team player.

Responsibilities
- Implement a cloud-native analytics platform with high performance and scalability.
- Find creative and elegant solutions to complex problems.
- Be an advocate for Rails and best practices in software engineering.
- Work in an agile environment where quick iterations and constructive feedback are the norm.
- Build solid expertise in the entire client application suite and leverage this knowledge to better design application and data frameworks.
- Adhere to the client’s iterative development processes to deliver concrete value each release while driving longer-term technical vision.
- Collaborate with cross-organizational teams, including Product Management, Integrations, Services, Support, and Operations, to ensure the overall success of software development, implementation, and deployment.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Regular performance appraisals to support your growth.
- 15 vacation days, 10 national holidays, 5 sick days.
- Free tech webinars and meetups organized by Svitla.
- Reimbursement for private medical insurance.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- Fun corporate online/offline celebrations and activities.
- Awesome team, friendly and supportive community!
Svitla Systems Inc. is looking for a Senior Full Stack Engineer for a full-time position (40 hours per week) in India. Our client is a SaaS-based analytics platform that provides business intelligence (BI) users with a single point of access to all their enterprise reports.

You’ll lead a development team for the internal platform. In this role, you’ll architect and build high-performance web applications and components used to centralize discovery, governance, and usage management across the data and analytics ecosystem. You will work across the stack using Java (Spring Boot), Angular/React, and relational and/or graph databases, and collaborate with product owners, product managers, and business stakeholders to deliver scalable, insightful solutions. You’ll also play a key role in driving the technical direction, mentoring junior engineers, and ensuring best practices in performance, security, and usability.

Tech stack: Java, Angular, SQL (migrating to a graph DB soon), Compu8, Azure, and AWS.

Requirements
- 5+ years of experience in full-stack development, ideally in a data-centric environment.
- Solid knowledge of Java (Spring Boot), Angular (v16+)/React, and SQL and/or graph databases.
- Deep understanding of building web-based data applications or internal tools used for analytics.
- Familiarity with common data patterns, ETL workflows, and working with large datasets.
- Experience with RESTful APIs, caching strategies, and performance tuning.
- Strong understanding of data privacy and security concerns in analytics applications.
- Excellent communication skills and the ability to collaborate with non-technical users.

Responsibilities
- Lead the design and development of full-stack analytics tools, dashboards, and self-service data products.
- Build backend APIs and services that efficiently aggregate and serve data.
- Design responsive and intuitive frontend interfaces using Angular for real-time data interaction and visualization.
- Optimize SQL queries and work closely with data engineers on schema design and performance tuning.
- Collaborate with stakeholders to translate analytical requirements into robust technical solutions.
- Mentor junior developers and review code for maintainability and performance.
- Drive adoption of best practices in testing, deployment, and code quality.

We offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Regular performance appraisals to support your growth.
- 15 vacation days, 10 national holidays, 5 sick days.
- Free tech webinars and meetups organized by Svitla.
- Reimbursement for private medical insurance.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- Fun corporate online/offline celebrations and activities.
- Awesome team, friendly and supportive community!
Svitla Systems Inc. is looking for a Senior Data Engineer with .NET for a full-time position (40 hours per week) in India. Our client is a leading expert network that provides business and government professionals with opportunities to communicate with industry and subject-matter experts to answer research questions. Customers consult with these experts over the phone and in person at conferences, teleconferences, custom events, and workshops. They may also gather primary research data through surveys, polls, or web-based offerings. Experts are categorized into six main industry sectors: healthcare; financial and business services; consumer goods and services; energy, industrials, and basic materials; tech, media, and telecom; and legal and regulatory. Since 2003, the company has provided its customers with primary research services, helping professionals gain a comprehensive understanding of a topic before making significant investments and/or business decisions. Their multinational customer list includes nine of the top 10 consulting firms, hundreds of hedge funds, and many of the largest private equity firms and Fortune-ranked companies. Requirements 8+ years of experience with .NET Core and Python. Experience with OOP principles and patterns, developing and delivering large-scale distributed systems. Strong understanding of grooming backlog items and ensuring that they are engineer-ready. Understanding of microservices in AKS and of building ETL pipelines and AI use cases in Databricks (a minimal ETL sketch follows this posting). Experience in unit and integration testing automation. Knowledge of ORMs such as Entity Framework Core and Dapper. Expertise in data modeling and SQL. Experience building data integrations and other system integrations using technologies such as Service Bus, Event Hubs, etc. Experience leveraging issue-tracking systems/wikis for documentation (Jira/Confluence). Willingness to learn to design fault-tolerant architectures for cloud deployments (load balancing, clustering, reducing/eliminating single points of failure, etc.). Problem-solving skills and a proactive work style. Strong interpersonal and communication skills. An independent contributor with drive. Collaborative and teamwork-oriented. Accountable for work and takes ownership of tasks. The ability to balance strategic and tactical methodologies in delivery. The ability to exhibit judgment when it comes to prioritization and overall team management. Responsibilities Collaborate with cross-functional teams to design, develop, test, and deploy software solutions. Develop and implement processes and procedures to ensure software quality. Maintain and improve codebase integrity by actively participating in code reviews and proactively identifying refactoring opportunities in maintained applications. Develop and support scalable microservices and ETL pipelines using .NET Core, Python, and Azure Platform Services. Implement unit and integration tests. Nice To Have 8+ years of experience developing software using C# and Python. 3+ years of experience implementing CI/CD pipelines using YAML, Helm charts, etc. Hands-on experience with Azure platform services such as AKS, Docker, APIM, Azure SQL, Cosmos DB, Storage accounts, Azure Functions, App Services, etc. Knowledge of Databricks or Snowflake (or similar platforms). Understanding of architecture diagramming tools, such as Visio or Lucidchart. Experience developing applications following CQRS and DDD. Knowledge of Aspect-Oriented Programming. We Offer US and EU projects based on advanced technologies.
Competitive compensation based on skills and experience. Regular performance appraisals to support your growth. 15 vacation days, 10 national holidays, 5 sick days. Free tech webinars and meetups organized by Svitla. Reimbursement for private medical insurance. Personalized learning program tailored to your interests and skill development. Bonuses for article writing, public talks, and other activities. Fun corporate online/offline celebrations and activities. Awesome team, friendly and supportive community!
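The .NET posting above names building ETL pipelines in Databricks without showing one, so here is the promised minimal PySpark sketch of the extract/transform/load shape such a job usually takes. It runs on Databricks or any local Spark installation; the paths, column names, and aggregation are hypothetical stand-ins, not the client's actual pipeline.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical paths; on Databricks these would typically be cloud-storage URIs.
RAW_PATH = "/tmp/raw_consultations.json"
OUT_PATH = "/tmp/curated_consultations.parquet"

spark = SparkSession.builder.appName("consultations-etl").getOrCreate()

# Extract: read raw event data (schema inferred for brevity).
raw = spark.read.json(RAW_PATH)

# Transform: drop malformed rows, then aggregate per expert.
curated = (
    raw.dropna(subset=["expert_id", "duration_minutes"])
       .groupBy("expert_id")
       .agg(
           F.count("*").alias("consultations"),
           F.sum("duration_minutes").alias("total_minutes"),
       )
)

# Load: write the curated table for downstream analytics or AI use cases.
curated.write.mode("overwrite").parquet(OUT_PATH)

In production the same structure would typically be parameterized and scheduled as a Databricks job, with the transform step unit-tested in isolation.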
Svitla Systems Inc. is looking for a Senior Software Engineer with a Data Background for a full-time position (40 hours per week) in India. Our client is a cloud platform for business spend management (BSM) that helps organizations manage their spending, procurement, invoicing, expenses, and supplier relationships. They provide a unified, cloud-based spend management platform that connects hundreds of organizations representing the Americas, EMEA, and APAC with millions of suppliers globally. The platform provides greater visibility into and control over how companies spend money. Small, medium, and large customers have used the platform to bring billions of dollars in cumulative spending under management. Founded in 2006 and headquartered in San Mateo, California, the company aims to streamline and optimize business processes, driving efficiency and cost savings. Requirements 4–8 years of experience in building and maintaining data pipelines for enterprise/SaaS applications. Strong knowledge of Python. Solid understanding of relational SQL and query optimization. Experience designing and implementing ETL workflows and data transformation processes (PySpark or similar libraries for ETL/data transformation). Deep knowledge of Kafka (or similar pub/sub systems) for data streaming. Strong experience with Apache Airflow or similar tools to schedule, monitor, and manage complex data pipelines (a minimal DAG sketch follows this posting). Experience with AWS cloud (data storage, compute, and managed services). Understanding of how to integrate datasets into BI/reporting tools (Tableau, Power BI, or QuickSight). Experience with CI/CD tooling for data pipeline deployment. Nice to have Familiarity with AI/ML-based data cleansing, deduplication, and entity resolution techniques. Familiarity with microservices and event-driven architecture. Knowledge of performance tuning and monitoring tools for data workflows. Responsibilities Implement a cloud-native analytics platform with high performance and scalability. Build an API-first infrastructure for data in and data out. Build data ingestion capabilities for platform data as well as external spend data. Leverage data classification AI algorithms to cleanse and harmonize data. Own data modeling, microservice orchestration, and monitoring & alerting. Develop comprehensive expertise in the entire application suite and leverage this knowledge to design more effective applications and data frameworks. Adhere to iterative development processes to deliver concrete value each release while driving longer-term technical vision. Collaborate with cross-organizational teams, including product management, integrations, services, support, and operations, to ensure the overall success of software development, implementation, and deployment. We Offer US and EU projects based on advanced technologies. Competitive compensation based on skills and experience. Regular performance appraisals to support your growth. 15 vacation days, 10 national holidays, 5 sick days. Free tech webinars and meetups organized by Svitla. Reimbursement for private medical insurance. Personalized learning program tailored to your interests and skill development. Bonuses for article writing, public talks, and other activities. Fun corporate online/offline celebrations and activities. Awesome team, friendly and supportive community!
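As referenced in the Airflow requirement above, here is a minimal sketch of an Apache Airflow DAG wiring an extract/transform/load chain. It targets Airflow 2.4+ (the schedule argument; older versions use schedule_interval), and the DAG id, task names, and callables are hypothetical placeholders rather than anything from the client's pipelines.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical callables standing in for real extract/transform/load logic.
def extract():
    print("pulling raw spend data")

def transform():
    print("cleansing and harmonizing records")

def load():
    print("publishing curated tables")

with DAG(
    dag_id="spend_data_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # one run per day; Airflow 2.4+ syntax
    catchup=False,                  # do not backfill missed intervals
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract runs first, load last.
    t_extract >> t_transform >> t_load

Each PythonOperator could equally trigger a PySpark job or an AWS-managed service call; the DAG itself only expresses ordering and scheduling.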