5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at Lifesight, you will play a crucial role in the Data and Business Intelligence organization by focusing on deep data engineering projects. Joining the data platform team in Bengaluru, you will have the opportunity to help define the technical strategy and the data engineering team culture in India. Your responsibilities will include designing and constructing data platforms and services, as well as managing data infrastructure in cloud environments to support strategic business decisions across Lifesight products.

You will build highly scalable distributed data processing systems, data solutions, and data pipelines that optimize data quality and are resilient to poor-quality data sources. You will also own data mapping, business logic, transformations, and data quality, while participating in architecture discussions, influencing the product roadmap, and taking ownership of new projects.

The ideal candidate should possess proficiency in Python and PySpark, a deep understanding of Apache Spark, experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto, and familiarity with distributed database systems. Experience with file formats like Parquet and Avro, NoSQL databases, and AWS and GCP is preferred. A minimum of 5 years of professional experience as a data or software engineer is required for this full-time position.

If you are a self-starter who is passionate about data engineering, ready to work with big data technologies, and eager to collaborate with a team of engineers while mentoring others, we encourage you to apply for this opportunity at Lifesight.
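For illustration, a minimal PySpark sketch of the "resilient to poor-quality data sources" pattern the posting describes: quarantine bad records instead of failing the pipeline. Paths and column names are hypothetical.

```python
# Illustrative only: route invalid records to a quarantine location rather
# than aborting the job. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("dq-demo").getOrCreate()

events = spark.read.parquet("s3a://example-bucket/raw/events/")  # hypothetical path

# Records must have a non-null user_id and a parseable timestamp.
is_valid = F.col("user_id").isNotNull() & F.to_timestamp("event_ts").isNotNull()

clean = events.filter(is_valid)
quarantine = events.filter(~is_valid)

clean.write.mode("append").parquet("s3a://example-bucket/clean/events/")
quarantine.write.mode("append").parquet("s3a://example-bucket/quarantine/events/")
```

Keeping the quarantined rows makes the data-quality ownership the posting mentions auditable: bad records can be inspected and replayed after the source is fixed.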
Posted 3 days ago
3.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Testing/Quality Assurance
Main location: India, Karnataka, Bangalore
Position ID: J0725-1442
Employment Type: Full Time

Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI's Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: ETL Testing Engineer
Position: Senior Test Engineer
Experience: 3-9 years
Category: Quality Assurance/Software Testing
Shift: 1-10 pm / UK shift
Main location: Chennai/Bangalore

Position Description: We are looking for an experienced DataStage tester to join our team. The ideal candidate should be passionate about coding and testing scalable, high-performance applications.

Your future duties and responsibilities:
- Develop and execute ETL test cases to validate data extraction, transformation, and loading processes.
- Write complex SQL queries to verify data integrity, consistency, and correctness across source and target systems.
- Automate ETL testing workflows using Python, PyTest, or other testing frameworks (a sketch follows this posting).
- Perform data reconciliation, schema validation, and data quality checks.
- Identify and report data anomalies, performance bottlenecks, and defects.
- Work closely with Data Engineers, Analysts, and Business Teams to understand data requirements.
- Design and maintain test data sets for validation.
- Implement CI/CD pipelines for automated ETL testing (Jenkins, GitLab CI, etc.).
- Document test results, defects, and validation reports.

Required qualifications to be successful in this role:
- ETL Testing: Strong experience in testing Informatica, Talend, SSIS, Databricks, or similar ETL tools.
- SQL: Advanced SQL skills (joins, aggregations, subqueries, stored procedures).
- Python: Proficiency in Python for test automation (Pandas, PySpark, PyTest).
- Databases: Hands-on experience with RDBMS (Oracle, SQL Server, PostgreSQL) and NoSQL (MongoDB, Cassandra).
- Big Data Testing (good to have): Hadoop, Hive, Spark, Kafka.
- Testing Tools: Knowledge of Selenium, Airflow, Great Expectations, or similar frameworks.
- Version Control: Git, GitHub/GitLab.
- CI/CD: Jenkins, Azure DevOps, or similar.
- Soft Skills: Strong analytical and problem-solving skills; ability to work in Agile/Scrum environments; good communication skills for cross-functional collaboration.

Preferred Qualifications:
- Experience with cloud platforms (AWS, Azure).
- Knowledge of Data Warehousing concepts (star schema, snowflake schema).
- Certification in ETL Testing, SQL, or Python is a plus.

Skills: Data Warehousing, MS SQL Server, Python

What you can expect from us: Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
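For illustration, a minimal PyTest sketch of the data-reconciliation testing this role describes: comparing row counts and an aggregate checksum between source and target. Connection strings and table names are hypothetical.

```python
# Illustrative sketch of ETL reconciliation tests in PyTest.
# Connection URLs and table names are hypothetical.
import pytest
import sqlalchemy as sa

SOURCE_URL = "postgresql://user:pass@source-db/sales"   # hypothetical
TARGET_URL = "postgresql://user:pass@warehouse/sales"   # hypothetical

@pytest.fixture(scope="module")
def engines():
    return sa.create_engine(SOURCE_URL), sa.create_engine(TARGET_URL)

def scalar(engine, query):
    # Run a query and return its single scalar result.
    with engine.connect() as conn:
        return conn.execute(sa.text(query)).scalar()

def test_row_counts_match(engines):
    src, tgt = engines
    assert scalar(src, "SELECT COUNT(*) FROM orders") == \
           scalar(tgt, "SELECT COUNT(*) FROM stg_orders")

def test_amount_totals_match(engines):
    src, tgt = engines
    # An aggregate checksum catches silent truncation or transform errors
    # that a bare row count would miss.
    assert scalar(src, "SELECT SUM(amount) FROM orders") == \
           scalar(tgt, "SELECT SUM(amount) FROM stg_orders")
```

Tests in this shape drop straight into the Jenkins or GitLab CI pipelines the posting mentions, since PyTest exit codes gate the build.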
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You will be responsible for working on Google Cloud Platform (GCP) projects, with a focus on the following skills:
- Strong proficiency in BigQuery (see the sketch after this listing)
- Experience building ETLs to Google Cloud Platform
- Proficient SQL writing
- Knowledge of Python programming
- Familiarity with PostgreSQL and MongoDB databases
- Experience with Data Build Tool (dbt)
- Working knowledge of Airflow
- Familiarity with Google Cloud Composer and Google Cloud Pub/Sub
- Experience in data extraction and transformation

Ideal candidates should have a notice period of 0 to 45 days and possess a Bachelor's degree in a related field. This position is based in Bangalore. If you are interested in this opportunity, please send your resume to career@krazymantra.com.
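For illustration, a minimal sketch of a post-load validation query run through the official BigQuery Python client; the project, dataset, and table names are hypothetical.

```python
# Illustrative only: checking daily load volumes in BigQuery.
# Project/dataset/table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

query = """
    SELECT DATE(event_ts) AS day, COUNT(*) AS rows_loaded
    FROM `example-project.analytics.events`
    GROUP BY day
    ORDER BY day DESC
    LIMIT 7
"""

# result() blocks until the job finishes and yields Row objects.
for row in client.query(query).result():
    print(row.day, row.rows_loaded)
```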
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

As an Infrastructure Engineer, you will be responsible for the technical design, planning, implementation, and optimization of performance tuning and recovery procedures for critical enterprise systems and applications. You will serve as the technical authority in system administration for complex SaaS, local, and cloud-based environments. Your role is critical in ensuring the high availability, reliability, and scalability of our infrastructure components. You will also be involved in designing philosophies, tools, and processes to enable the rapid delivery of evolving products.

In this role you will:
- Design, configure, and document cloud-based infrastructures using AWS Virtual Private Cloud (VPC) and EC2 instances (a boto3 sketch follows this posting).
- Secure and monitor hosted production SaaS environments provided by third-party partners.
- Define, document, and manage network configurations within AWS VPCs and between VPCs and data center networks, including firewall, DNS, and ACL configurations.
- Lead the design and review of developer work on DevOps tools and practices.
- Ensure high availability and reliability of infrastructure components through monitoring and performance tuning.
- Implement and maintain security measures to protect infrastructure from threats.
- Collaborate with cross-functional teams to design and deploy scalable solutions.
- Automate repetitive tasks and improve processes using scripting languages such as Python, PowerShell, or Bash.
- Support Airflow DAGs in the Data Lake, utilizing the Spark framework and Big Data technologies.
- Provide support for infrastructure-related issues and conduct root cause analysis.
- Develop and maintain documentation for infrastructure configurations and procedures.
- Administer databases, handle data backups, monitor databases, and manage data rotation.
- Work with RDBMS and NoSQL systems, leading stateful data migration between different data systems.

Experience & Qualifications:
- Bachelor's or Master's degree in Information Science, Computer Science, Business, or equivalent work experience.
- 3-5 years of experience with Amazon Web Services, particularly VPC, S3, EC2, and EMR.
- Experience in setting up new VPCs and integrating them with existing networks is highly desirable.
- Experience in maintaining infrastructure for Data Lake/Big Data systems built on the Spark framework and Hadoop technologies.
- Experience with Active Directory and LDAP setup, maintenance, and policies.
- Workday certification is preferred but not required; exposure to Workday Integrations and Configuration is preferred.
- Strong knowledge of networking concepts and technologies.
- Experience with infrastructure automation tools (e.g., Terraform, Ansible, Chef).
- Familiarity with containerization technologies like Docker and Kubernetes.
- Excellent problem-solving skills and attention to detail.
- Strong verbal and written communication skills.
- Understanding of Agile project methodologies, including Scrum and Kanban, is required.

Accommodation requests: If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
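For illustration, a boto3 sketch of the kind of VPC and security-group audit the AWS duties above imply; the region and the "open SSH" check are assumptions about what such an audit would look for.

```python
# Illustrative sketch: audit VPC CIDRs and flag security groups that
# allow SSH from anywhere. Region is hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# List VPCs and their CIDR blocks for documentation.
for vpc in ec2.describe_vpcs()["Vpcs"]:
    print(vpc["VpcId"], vpc["CidrBlock"])

# Flag security groups with port 22 open to 0.0.0.0/0.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        if rule.get("FromPort") == 22 and any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        ):
            print("Open SSH:", sg["GroupId"], sg["GroupName"])
```

A scheduled script in this shape is one way to meet both the "document network configurations" and "implement security measures" responsibilities at once.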
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Data Engineer at our Pune location, you will play a critical role in designing, developing, and maintaining scalable data pipelines and architectures using Databricks on Azure/AWS cloud platforms. With 6 to 9 years of experience in the field, you will collaborate with stakeholders to integrate large datasets, optimize performance, implement ETL/ELT processes, ensure data governance, and work closely with cross-functional teams to deliver accurate solutions.

Your responsibilities will include building, maintaining, and optimizing data workflows, integrating datasets from various sources, tuning pipelines for performance and scalability, implementing ETL/ELT processes using Spark and Databricks, ensuring data governance, collaborating with different teams, documenting data pipelines, and developing automated processes for continuous integration and deployment of data solutions.

To excel in this role, you should have 6 to 9 years of hands-on experience as a Data Engineer; expertise in Apache Spark, Delta Lake, and Azure/AWS Databricks; proficiency in Python, Scala, or Java; advanced SQL skills; and experience with cloud data platforms, data warehousing solutions, data modeling, ETL tools, version control systems, and automation tools. Soft skills such as problem-solving, attention to detail, and the ability to work in a fast-paced environment are essential. Nice-to-have skills include experience with Databricks SQL and Databricks Delta, knowledge of machine learning concepts, and experience with CI/CD pipelines for data engineering solutions.

Joining our team offers challenging work with international clients, growth opportunities, a collaborative culture, and global project involvement. We provide competitive salaries, flexible work schedules, health insurance, performance-based bonuses, and other standard benefits. If you are passionate about data engineering, possess the required skills and qualifications, and thrive in a dynamic and innovative environment, we welcome you to apply for this exciting opportunity.
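For illustration, a minimal PySpark sketch of the ETL pattern this posting centers on: ingest, transform, and write to a Delta table. Mount paths and column names are hypothetical.

```python
# Illustrative PySpark/Delta sketch: ingest, clean, and append to a
# partitioned Delta table. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

raw = spark.read.json("/mnt/raw/transactions/")  # hypothetical mount

transformed = (
    raw.withColumn("txn_date", F.to_date("txn_ts"))
       .filter(F.col("amount") > 0)
       .dropDuplicates(["txn_id"])
)

# Delta gives ACID guarantees on the append, so partial job failures
# do not leave half-written partitions behind.
(transformed.write
    .format("delta")
    .mode("append")
    .partitionBy("txn_date")
    .save("/mnt/curated/transactions/"))
```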
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us: Welcome to FieldAssist, where innovation meets excellence! We are a top-tier SaaS platform that specializes in optimizing route-to-market strategies and enhancing brand relationships within the CPG partner ecosystem. With over 1,00,000 sales users representing 600+ CPG brands across 10+ countries in South East Asia, the Middle East, and Africa, we reach 10,000 distributors and 7.5 million retail outlets every day. FieldAssist is a 'Proud Partner to Great Brands' like Godrej Consumers, Saro Africa, Danone, Tolaram, Haldiram's, Eureka Forbes, Bisleri, Nilon's, Borosil, Adani Wilmar, Henkel, Jockey, Emami, Philips, Ching's and Mamaearth, among others. Do you crave a dynamic work environment where you can excel and enjoy the journey? We have the perfect opportunity for you!

Responsibilities:
- Build and maintain robust backend services and REST APIs using Python (Django, Flask, or FastAPI).
- Develop end-to-end ML pipelines including data preprocessing, model inference, and result delivery.
- Integrate and scale AI/LLM models, including RAG (Retrieval Augmented Generation) and intelligent agents.
- Design and optimize ETL pipelines and data workflows using tools like Apache Airflow or Prefect.
- Work with Azure SQL and Cosmos DB for transactional and NoSQL workloads.
- Implement and query vector databases for similarity search and embedding-based retrieval, e.g., Azure Cognitive Search, FAISS, or Pinecone (see the sketch after this listing).
- Deploy services on Azure Cloud, using Docker and CI/CD practices.
- Collaborate with cross-functional teams to bring AI features into product experiences.
- Write unit/integration tests and participate in code reviews to ensure high code quality.
- Create and maintain applications using the .NET platform and environment.

Who we're looking for:
- Strong command of Python 3.x, with experience in Django, Flask, or FastAPI.
- Experience building and consuming RESTful APIs in production systems.
- Solid grasp of ML workflows, including model integration, inferencing, and LLM APIs (e.g., OpenAI).
- Familiarity with RAG, vector embeddings, and prompt-based workflows.
- Proficiency with Azure SQL and Cosmos DB (NoSQL).
- Experience with vector databases (e.g., FAISS, Pinecone, Azure Cognitive Search).
- Proficiency in containerization using Docker, and deployment on Azure Cloud.
- Experience with data orchestration tools like Apache Airflow.
- Comfortable working with Git, CI/CD pipelines, and observability tools.
- Strong debugging, testing (pytest/unittest), and optimization skills.

Good to have:
- Experience with LangChain, transformers, or LLM fine-tuning.
- Exposure to MLOps practices and Azure ML.
- Hands-on experience with PySpark for data processing at scale.
- Contributions to open-source projects or AI toolkits.
- Background working in startup-like environments or cross-functional product teams.

FieldAssist on the Web:
Website: https://www.fieldassist.com/people-philosophy-culture/
Culture Book: https://www.fieldassist.com/fa-culture-book
CEO's Message: https://www.youtube.com/watch?v=bl_tM5E5hcw
LinkedIn: https://www.linkedin.com/company/fieldassist/
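For illustration, a minimal sketch of the embedding-retrieval endpoint pattern this posting describes, using FastAPI and FAISS. The `embed` function here is a hypothetical stand-in for a real encoder (e.g., a sentence-transformers model), and the corpus is random placeholder data.

```python
# Illustrative sketch: a FastAPI similarity-search endpoint over a FAISS
# index. The embedding function and corpus are hypothetical placeholders.
import faiss
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

DIM = 384
index = faiss.IndexFlatL2(DIM)
index.add(np.random.rand(1000, DIM).astype("float32"))  # placeholder corpus

def embed(text: str) -> np.ndarray:
    # Hypothetical: replace with a real encoder in practice.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random((1, DIM), dtype=np.float32)

app = FastAPI()

class Query(BaseModel):
    text: str
    top_k: int = 5

@app.post("/search")
def search(q: Query):
    distances, ids = index.search(embed(q.text), q.top_k)
    return {"ids": ids[0].tolist(), "distances": distances[0].tolist()}
```

In a RAG service, the returned ids would map back to document chunks that are then passed to the LLM as context.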
Posted 3 days ago
5.0 - 8.0 years
0 Lacs
Delhi, India
On-site
Job Summary: We are looking for a skilled Data Modeler / Architect with 5-8 years of experience in designing, implementing, and optimizing robust data architectures in the financial payments industry. The ideal candidate will have deep expertise in SQL, data modeling, ETL/ELT pipeline development, and cloud-based data platforms such as Databricks or Snowflake. You will play a key role in designing scalable data models, orchestrating reliable data workflows, and ensuring the integrity and performance of mission-critical financial datasets. This is a highly collaborative role interfacing with engineering, analytics, product, and compliance teams.

Key Responsibilities:
- Design, implement, and maintain logical and physical data models to support transactional, analytical, and reporting systems.
- Develop and manage scalable ETL/ELT pipelines for processing large volumes of financial transaction data.
- Tune and optimize SQL queries, stored procedures, and data transformations for maximum performance.
- Build and manage data orchestration workflows using tools like Airflow, Dagster, or Luigi.
- Architect data lakes and warehouses using platforms like Databricks, Snowflake, BigQuery, or Redshift.
- Enforce and uphold data governance, security, and compliance standards (e.g., PCI-DSS, GDPR).
- Collaborate closely with data engineers, analysts, and business stakeholders to understand data needs and deliver solutions.
- Conduct data profiling, validation, and quality assurance to ensure clean and consistent data.
- Maintain clear and comprehensive documentation for data models, pipelines, and architecture.

Required Skills & Qualifications:
- 5-8 years of experience as a Data Modeler, Data Architect, or Senior Data Engineer in the financial/payments domain.
- Advanced SQL expertise, including query tuning, indexing, and performance optimization.
- Proficiency in developing ETL/ELT workflows using tools such as Spark, dbt, Talend, or Informatica.
- Experience with data orchestration frameworks: Airflow, Dagster, Luigi, etc.
- Strong hands-on experience with cloud-based data platforms like Databricks, Snowflake, or equivalents.
- Deep understanding of data warehousing principles: star/snowflake schema, slowly changing dimensions, etc. (see the sketch after this listing).
- Familiarity with financial data structures, such as payment transactions, reconciliation, fraud patterns, and audit trails.
- Working knowledge of cloud services (AWS, GCP, or Azure) and data security best practices.
- Strong analytical thinking and problem-solving capabilities in high-scale environments.

Preferred Qualifications:
- Experience with real-time data pipelines (e.g., Kafka, Spark Streaming).
- Exposure to data mesh or data fabric architecture paradigms.
- Certifications in Snowflake, Databricks, or relevant cloud platforms.
- Knowledge of Python or Scala for data engineering tasks.
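For the slowly-changing-dimension requirement above, a minimal sketch of a Type 2 update written as Spark SQL on Delta Lake (one of the posting's named platforms; Snowflake's MERGE has a similar shape). Table and column names are hypothetical.

```python
# Illustrative only: SCD Type 2 in two statements on Delta Lake.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Step 1: close out current rows whose attributes changed.
spark.sql("""
    MERGE INTO dim_merchant AS d
    USING staged_merchant_updates AS s
      ON d.merchant_id = s.merchant_id AND d.is_current = true
    WHEN MATCHED AND d.risk_tier <> s.risk_tier THEN
      UPDATE SET d.is_current = false, d.valid_to = current_timestamp()
""")

# Step 2: insert new versions (changed merchants now have no current row,
# so the NULL check catches both them and brand-new merchants).
spark.sql("""
    INSERT INTO dim_merchant
    SELECT s.merchant_id, s.risk_tier, current_timestamp() AS valid_from,
           NULL AS valid_to, true AS is_current
    FROM staged_merchant_updates s
    LEFT JOIN dim_merchant d
      ON d.merchant_id = s.merchant_id AND d.is_current = true
    WHERE d.merchant_id IS NULL
""")
```

Statement order matters here: the close-out must run first so the insert's NULL check identifies the changed keys.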
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Telangana
On-site
You will be joining Teradata, a company that believes in empowering individuals with better information through its cloud analytics and data platform for AI. By providing harmonized data, trusted AI, and faster innovation, Teradata enables customers and their clients to make more informed decisions across various industries.

As part of the team, your responsibilities will include designing, developing, and maintaining scalable enterprise applications, data processing, and engineering pipelines. You will write efficient, scalable, and clean code primarily in Go (Golang), Java, or Python. Collaborating with cross-functional teams, you will define, design, and implement new features while ensuring the availability, reliability, and performance of deployed applications. Integrating with CI/CD pipelines will be crucial for seamless deployment and development cycles. You will also monitor and optimize application performance, troubleshoot and resolve issues, resolve customer incidents, and support the Customer Support and Operations teams.

You will work with a high-performing engineering team that values innovation, continuous learning, and open communication. The team focuses on mutual respect, empowering members, celebrating diverse perspectives, and fostering professional growth. This Individual Contributor role reports to the Engineering Manager.

To be qualified for this role, you should have a B.Tech/M.Tech/MCA/MSc degree in CSE/IT or related disciplines, along with 3-5 years of relevant industry experience. Expertise in SQL and either Java or Golang is essential, as is experience with Python and REST APIs in Linux environments and working in public cloud environments like AWS, Azure, or Google Cloud. Excellent communication and teamwork skills are also required.

Preferred qualifications include experience with containerization (Docker) and orchestration tools (Kubernetes); familiarity with modern data engineering tools such as Airbyte, Airflow, and dbt; good knowledge of Java/Python and development experience; familiarity with the Teradata database; a proactive and solution-oriented mindset; a passion for technology and continuous learning; the ability to work independently while contributing to the team's success; creativity; adaptability; and a strong sense of ownership, accountability, and drive to make an impact.

Teradata prioritizes a people-first culture, offering a flexible work model, focusing on well-being, and being an anti-racist company dedicated to fostering a diverse, equitable, and inclusive environment that values individuals for who they are.
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
Zinnia is the leading technology platform for accelerating life and annuities growth, simplifying the experience of buying, selling, and administering insurance products. Our success is driven by a commitment to three core values: be bold, team up, deliver value. With over $180 billion in assets under administration, serving 100+ carrier clients and 2500 distributors and partners, Zinnia enables more people to protect their financial futures.

We are looking for an experienced Data Engineer to join our data engineering team. Your role will involve designing, building, and optimizing robust data pipelines and platforms that power our analytics, products, and decision-making. You will collaborate with data scientists, analysts, product managers, and other engineers to deliver scalable, efficient, and reliable data solutions.

Your responsibilities will include designing, developing, and maintaining scalable big data pipelines using Spark (Scala or PySpark), Hive, and HDFS. You will also build and manage data workflows and orchestration using Airflow, write efficient production-grade code in languages like Python, Java, or Scala, and develop complex SQL queries for data transformation and reporting. Additionally, you will work on cloud platforms like AWS to deploy and manage data infrastructure and collaborate with data stakeholders to deliver high-quality data solutions.

To be successful in this role, you should have strong experience with the big data stack, excellent programming skills, expertise in SQL, hands-on experience with Spark tuning and optimization, and familiarity with Airflow for data workflow orchestration. A degree in Computer Science, Engineering, or a related field, along with at least 5 years of experience as a Data Engineer, is required. You should also have a proven track record of delivering production-ready data pipelines in big data environments and possess strong analytical thinking, problem-solving, and communication skills.

Preferred or nice-to-have skills include knowledge of the AWS ecosystem, experience with Trino or Presto for interactive querying, familiarity with lakehouse formats, exposure to dbt for analytics engineering, experience with Kafka for streaming ingestion, and familiarity with monitoring tools like Prometheus and Grafana.

Joining our team as a Data Engineer will provide you with the opportunity to work on cutting-edge technologies, collaborate with a diverse group of professionals, and contribute to impactful projects that shape the future of insurance technology.
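For illustration, a minimal sketch of the Spark tuning this posting asks about: sizing shuffle partitions and broadcasting a small dimension table. Dataset paths, names, and the partition count are hypothetical.

```python
# Illustrative only: two common Spark tuning moves. Paths and the
# partition count are hypothetical and workload-dependent.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

# Match shuffle parallelism to the data volume instead of the 200 default,
# and let adaptive query execution coalesce skewed partitions.
spark.conf.set("spark.sql.shuffle.partitions", "400")
spark.conf.set("spark.sql.adaptive.enabled", "true")

policies = spark.read.parquet("/data/policies")   # large fact table
carriers = spark.read.parquet("/data/carriers")   # small dimension table

# A broadcast join ships the small side to every executor and avoids
# shuffling the large side entirely.
joined = policies.join(broadcast(carriers), "carrier_id")
joined.write.mode("overwrite").parquet("/data/policies_enriched")
```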
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
Job Description: As a Snowflake Admin with 6+ years of experience and the ability to join immediately, you will be responsible for administering and managing Snowflake environments, including configuration, security, and maintenance tasks. Your role will involve monitoring and optimizing Snowflake performance, storage usage, and query efficiency to enhance overall system functionality.

In this position, you will implement and manage role-based access control (RBAC) and data security policies to safeguard sensitive information (see the sketch after this listing). You will also set up and oversee data sharing, data replication, and virtual warehouses to support various data operations effectively. You will be expected to automate administrative tasks using SQL, the Snowflake CLI, or scripting languages such as Python and Bash, streamlining processes and improving efficiency within the Snowflake environment. Providing support for data integration tools and pipelines like Fivetran, dbt, Informatica, and Airflow will also be part of your responsibilities.

Key Skills: Snowflake administration
Industry Type: IT/Computers - Software
Required Education: Bachelor's degree
Employment Type: Full Time, Permanent

If you are looking for a dynamic opportunity to apply your expertise in Snowflake administration, apply now, quoting Job Code GO/JC/668/2025. This is a contract hire; the recruiter for this role is Christopher.
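For illustration, a minimal sketch of automating RBAC grants with the Snowflake Python connector, one of the automation paths the posting names. Account, role, warehouse, and database names are hypothetical.

```python
# Illustrative sketch: scripted RBAC setup for a read-only analyst role.
# Account, credentials, and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # hypothetical
    user="admin_user",
    password="***",
    role="SECURITYADMIN",
)

grants = [
    "CREATE ROLE IF NOT EXISTS ANALYST_RO",
    "GRANT USAGE ON WAREHOUSE ANALYTICS_WH TO ROLE ANALYST_RO",
    "GRANT USAGE ON DATABASE PROD TO ROLE ANALYST_RO",
    "GRANT USAGE ON ALL SCHEMAS IN DATABASE PROD TO ROLE ANALYST_RO",
    "GRANT SELECT ON ALL TABLES IN DATABASE PROD TO ROLE ANALYST_RO",
    "GRANT ROLE ANALYST_RO TO ROLE SYSADMIN",  # keep the role hierarchy intact
]

cur = conn.cursor()
try:
    for stmt in grants:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```

Keeping grants in version-controlled scripts rather than ad hoc consoles makes access reviewable, which supports the data security policies the role owns.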
Posted 3 days ago
7.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
- Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
- Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics.
- Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
- Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL.
- Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
- Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks (see the sketch after this listing).
- Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
- Work with other members of the project team to support delivery of additional project components (API interfaces).
- Evaluate the performance and applicability of multiple tools against customer requirements.
- Work within an Agile delivery / DevOps methodology to deliver proof of concept and production implementations in iterative sprints.
- Integrate Databricks with other technologies (ingestion tools, visualization tools).

Requirements:
- Proven experience working as a data engineer.
- Highly proficient in using the Spark framework (Python and/or Scala).
- Extensive knowledge of Data Warehousing concepts, strategies, and methodologies.
- Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
- Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
- Experience in designing and hands-on development of cloud-based analytics solutions.
- Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
- Designing and building of data pipelines using API ingestion and streaming ingestion methods.
- Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
- Thorough understanding of Azure Cloud Infrastructure offerings.
- Strong experience in common data warehouse modelling principles, including Kimball.
- Working knowledge of Python is desirable.
- Experience developing security models.
- Databricks & Azure Big Data Architecture Certification would be a plus.
- Must be team oriented with strong collaboration, prioritization, and adaptability skills.

Mandatory skill sets: Azure Databricks
Preferred skill sets: Azure Databricks
Years of experience required: 7-10 years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering
Required Skills: Databricks Platform
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
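For illustration, a minimal PySpark sketch of the Azure pipeline step this role centers on: reading raw files from ADLS Gen2 and writing a Delta table in Databricks. The storage account, container, and table names are hypothetical, and cluster credentials for the abfss path are assumed to be configured.

```python
# Illustrative only: ADLS Gen2 ingestion into a Delta table on Databricks.
# Storage account, container, and table names are hypothetical; access to
# the abfss path is assumed to be configured on the cluster.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

source = "abfss://raw@examplestorage.dfs.core.windows.net/sales/2024/"

df = (spark.read
      .option("header", "true")
      .csv(source)
      .withColumn("ingested_at", F.current_timestamp()))

(df.write
   .format("delta")
   .mode("append")
   .saveAsTable("bronze.sales"))  # hypothetical catalog table
```

In practice a step like this is triggered by Azure Data Factory on a schedule or event, with downstream notebooks promoting data from bronze to curated layers.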
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
The Data Services ETL Developer specializes in data transformations and integration projects using Zeta's tools, third-party software, and custom code. An understanding of CRM methodologies related to marketing operations is essential. Responsibilities include manipulating client and internal marketing data across various platforms, automating scripts for data transfer, building and managing cloud-based data pipelines using AWS services (see the sketch after this listing), managing tasks with competing priorities, and collaborating with technical staff to support a proprietary ETL environment. Collaborating with database/CRM specialists, modelers, analysts, and application programmers is crucial for delivering results to clients.

The ideal candidate should cover the US time zone and be in the office a minimum of three days per week. Required experience includes database marketing; knowledge of US and international postal addresses (including SAP postal products); proficiency with AWS services (S3, Airflow, RDS, Athena); experience with Oracle and Snowflake SQL; and familiarity with tools such as Snowflake, Airflow, GitLab, Grafana, LDAP, OpenVPN, DCWEB, Postman, and Microsoft Excel. Knowledge of SQL Server, SFTP, PGP, large-scale customer databases, and the project life cycle, plus proficiency with editors like Notepad++ and UltraEdit, is also required. Strong communication and collaboration skills and the ability to manage multiple tasks simultaneously are essential.

Minimum qualifications include a Bachelor's degree or equivalent with 5+ years of experience in database marketing and cloud-based technologies, a strong understanding of data engineering concepts and cloud infrastructure, and excellent oral and written communication skills.
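For illustration, a boto3 sketch of the S3 file movement such a role automates: paging through a client drop folder and staging files for processing. Bucket and prefix names are hypothetical.

```python
# Illustrative sketch: copy newly landed files from a client dropbox
# bucket into a staging area. Bucket and prefix names are hypothetical.
import boto3

s3 = boto3.client("s3")

# Paginate so the listing works past the 1,000-object response limit.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="client-dropbox", Prefix="incoming/"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        s3.copy_object(
            Bucket="etl-staging",
            Key=key.replace("incoming/", "staged/"),
            CopySource={"Bucket": "client-dropbox", "Key": key},
        )
```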
Posted 3 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary: Responsible for contributing to the development and deployment of machine learning algorithms. Evaluates the accuracy and functionality of machine learning algorithms as part of a larger team. Contributes to translating application requirements into machine learning problem statements. Analyzes and evaluates solutions, both internally generated and third-party supplied. Contributes to developing ways to use machine learning to solve problems and discover new products, working on a portion of the problem and collaborating with more senior researchers as needed. Works with moderate guidance in own area of knowledge.

About the Role: We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You'll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators and Developers to integrate smart detection and decision logic directly into our operational workflows. This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally.
What You'll Do:
- Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root cause inference (an anomaly-detection sketch follows this posting).
- Engineer features to enrich alerts using service relationships, business context, change history, and topological data.
- Apply NLP and ML techniques to classify and structure logs and unstructured alert messages.
- Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs.
- Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data.
- Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness.
- Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers.
- Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals.
- Conduct A/B testing, offline validation, and live performance monitoring of ML models.
- Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers, and leadership.
- Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements.
- Collaborate on the design of hybrid ML + rule-based systems to support dynamic correlation and intelligent alert grouping.
- Lead and support innovation efforts including POCs, POVs, and exploration of emerging AI/ML tools and strategies.
- Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly.
- Participate in on-call rotations and provide operational support as needed.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field.
- 3+ years of experience building and deploying ML solutions in production environments.
- 2+ years working with AIOps, observability, or real-time operations data.
- Strong coding skills in Python (including pandas, NumPy, scikit-learn, PyTorch, or TensorFlow).
- Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark.
- Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs).
- Strong grasp of modern ML techniques including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection.
- Experience managing structured and unstructured data, and building features from logs, alerts, metrics, and traces.
- Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink.
- Strong understanding of model evaluation techniques including precision/recall trade-offs, ROC, AUC, and calibration.
- Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases.
- Ability to collaborate effectively with SREs and platform teams and participate in Agile/DevOps workflows.
- Clear written and verbal communication skills to present findings to technical and non-technical stakeholders.
- Comfortable working across Git, Confluence, JIRA, and collaborative agile environments.

Nice To Have:
- Experience building or contributing to an AIOps platform (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC, etc.).
- Experience working in streaming media, OTT platforms, or large-scale consumer services.
- Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling.
- Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, and event-based alerting tools.
- Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation).
- Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data.
- Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration.
- Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries.
- Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME).
- Experience with containerized model deployment using Docker or Kubernetes.
- Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation.
- Experience designing APIs in Python or Go to expose models as services.
- Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing.
- Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts.
- Certifications in cloud architecture, ML engineering, or data science specialization.

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education: Bachelor's Degree. While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience: 2-5 Years
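For illustration, a minimal scikit-learn sketch of the anomaly-detection work described above: flagging unusual alert volumes with an Isolation Forest. The metric window here is synthetic, with one injected incident.

```python
# Illustrative only: detect spikes in per-minute alert counts with an
# Isolation Forest. The data is synthetic; the contamination rate is a
# hypothetical tuning choice.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# A day of per-minute alert counts: mostly steady Poisson noise,
# plus a five-minute injected incident.
counts = rng.poisson(lam=20, size=(1440, 1)).astype(float)
counts[700:705] += 300

model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(counts)  # -1 = anomaly, 1 = normal

anomalous_minutes = np.where(labels == -1)[0]
print("Anomalous minutes:", anomalous_minutes)
```

In a production AIOps pipeline the same model would score a sliding window of real metrics, with flagged windows feeding the alert-correlation and prioritization logic the posting describes.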
Posted 3 days ago
10.0 - 13.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Aeris: For more than three decades, Aeris has been a trusted cellular IoT leader enabling the biggest IoT programs and opportunities across Automotive, Utilities and Energy, Fleet Management and Logistics, Medical Devices, and Manufacturing. Our IoT technology expertise serves a global ecosystem of 7,000 enterprise customers, 30 mobile network operator partners, and 80 million IoT devices across the world. Aeris powers today's connected smart world with innovative technologies and borderless connectivity that simplify management, enhance security, optimize performance, and drive growth. Built from the ground up for IoT and road-tested at scale, Aeris IoT Services are based on the broadest technology stack in the industry, spanning connectivity up to vertical solutions. As veterans of the industry, we know that implementing an IoT solution can be complex, and we pride ourselves on making it simpler.

Our company is in an enviable spot. We're profitable, and both our bottom line and our global reach are growing rapidly. We're playing in an exploding market where technology evolves daily and new IoT solutions and platforms are being created at a fast pace.

A few things to know about us:
- We put our customers first. When making decisions, we always seek to do what is right for our customer first, our company second, our teams third, and individual selves last.
- We do things differently. As a pioneer in a highly competitive industry that is poised to reshape every sector of the global economy, we cannot fall back on old models. Rather, we must chart our own path and strive to out-innovate, out-learn, out-maneuver and out-pace the competition on the way.
- We walk the walk on diversity. We're a brilliant and eclectic mix of ethnicities, religions, industry experiences, sexual orientations, generations and more – and that's by design. We see diverse perspectives as a core competitive advantage.
- Integrity is essential. We believe in doing things well – and doing them right. Integrity is a core value here: you'll see it embodied in our staff, our management approach and growing social impact work (we have a VP devoted to it). You'll also see it embodied in the way we manage people and our HR issues: we expect employees and managers to deal with issues directly, immediately and with the utmost respect for each other and for the Company.
- We are owners. Strong managers enable and empower their teams to figure out how to solve problems. You will be no exception, and will have the ownership, accountability and autonomy needed to be truly creative.

Position Title: Senior Tech Lead
Experience Required: 10-13 years
Location: Noida

Essential Duties & Responsibilities:
- Research, design and development of next-generation applications supporting billions of transactions and data.
- Design and development of cloud-based solutions, with extensive hands-on experience in Big Data, distributed programming, ETL workflows and orchestration tools.
- Design and development of microservices in Java/J2EE or Node JS, with experience in containerization using Docker and Kubernetes.
- Focus on developing cloud-native applications utilizing cloud services.
- Work with product managers/owners and internal as well as external customers following Agile methodology.
- Practice rapid iterative product development to mature promising concepts into successful products.
- Execute with a sense of urgency to drive ideas into products through the innovation life-cycle (e.g., demo/evangelize).
- Should be experienced in using GenAI for faster development.

Skills Required:
- Proven experience developing high-performing, scalable cloud applications using various cloud development stacks and services.
- Proven experience with containers and the GCP and AWS cloud platforms.
- Deep skills in Java/Python/Node JS/SQL/PLSQL.
- Working experience with Spring Boot, ORM, JPA, transaction management, concurrency, and design patterns.
- Good understanding of NoSQL databases like MongoDB.
- Experience with workflow and orchestration tools like NiFi and Airflow would be a big plus.
- Deep understanding of best design and software engineering practices: design principles and patterns, unit testing, performance engineering.
- Good understanding of distributed architecture, plug-ins and APIs.
- Prior experience with security, cloud, and container security is a great advantage.
- Hands-on experience in building applications on various platforms, with a deep focus on usability, performance and integration with downstream REST web services.
- Exposure to Generative AI models, prompt engineering, or integration with APIs like OpenAI, Cohere, or Google Gemini.

Qualifications/Requirements: B.Tech/Masters in Computer Science/Engineering or Electrical/Electronic Engineering.

Aeris may conduct background checks to verify the information provided in your application and assess your suitability for the role. The scope and type of checks will comply with the applicable laws and regulations of the country where the position is based. Additional detail will be provided via the formal application process.

Aeris walks the walk on diversity. We're a brilliant mix of varying ethnicities, religions, cultures, sexual orientations, gender identities, ages and professional/personal/military experiences – and that's by design. Diverse perspectives are essential to our culture, innovative process and competitive edge. Aeris is proud to be an equal opportunity employer.
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Aeris: For more than three decades, Aeris has been a trusted cellular IoT leader enabling the biggest IoT programs and opportunities across Automotive, Utilities and Energy, Fleet Management and Logistics, Medical Devices, and Manufacturing. Our IoT technology expertise serves a global ecosystem of 7,000 enterprise customers and 30 mobile network operator partners, and 80 million IoT devices across the world. Aeris powers today’s connected smart world with innovative technologies and borderless connectivity that simplify management, enhance security, optimize performance, and drive growth. Built from the ground up for IoT and road-tested at scale, Aeris IoT Services are based on the broadest technology stack in the industry, spanning connectivity up to vertical solutions. As veterans of the industry, we know that implementing an IoT solution can be complex, and we pride ourselves on making it simpler. Our company is in an enviable spot. We’re profitable, and both our bottom line and our global reach are growing rapidly. We’re playing in an exploding market where technology evolves daily and new IoT solutions and platforms are being created at a fast pace. A few things to know about us: We put our customers first . When making decisions, we always seek to do what is right for our customer first, our company second, our teams third, and individual selves last We do things differently. As a pioneer in a highly competitive industry that is poised to reshape every sector of the global economy, we cannot fall back on old models. Rather, we must chart our own path and strive to out-innovate, out-learn, out-maneuver and out-pace the competition on the way We walk the walk on diversity. We’re a brilliant and eclectic mix of ethnicities, religions, industry experiences, sexual orientations, generations and more – and that’s by design. We see diverse perspectives as a core competitive advantage Integrity is essential. We believe in doing things well – and doing them right. Integrity is a core value here: you’ll see it embodied in our staff, our management approach and growing social impact work (we have a VP devoted to it). You’ll also see it embodied in the way we manage people and our HR issues: we expect employees and managers to deal with issues directly, immediately and with the utmost respect for each other and for the Company We are owners. Strong managers enable and empower their teams to figure out how to solve problems. You will be no exception, and will have the ownership, accountability and autonomy needed to be truly creative Position Title: Tech Lead Experience Required: 6-10 years Job Description Essential Duties & Responsibilities: Research, design and development of next generation Applications supporting billions of transactions and data. Design and development of cloud-based solutions with extensive hand-on experience on Big Data, distributed programming, ETL workflows and orchestration tools. Design and development of microservices in Java / J2EE, Node JS, with experience in containerization using Docker, Kubernetes Focus should be on developing cloud native applications utilizing cloud services. Work with product managers/owners & internal as well as external customers following Agile methodology Practice rapid iterative product development to mature promising concepts into successful products. Execute with a sense of urgency to drive ideas into products through the innovation life-cycle, , demo/evangelize. 
Should be experienced in using GenAI for faster development.
Skills Required:
Proven experience developing high-performing, scalable cloud applications using various cloud development stacks and services.
Proven experience with containers and the GCP and AWS cloud platforms.
Deep skills in Java / Python / Node JS / SQL / PLSQL.
Working experience with Spring Boot, ORM, JPA, transaction management, concurrency, and design patterns.
Good understanding of NoSQL databases like MongoDB.
Experience with workflow and orchestration tools like NiFi and Airflow would be a big plus.
Deep understanding of design and software engineering best practices: design principles and patterns, unit testing, and performance engineering.
Good understanding of distributed architecture, plug-ins, and APIs.
Prior experience with security, cloud, and container security is a great advantage.
Hands-on experience in building applications on various platforms, with a deep focus on usability, performance, and integration with downstream REST web services.
Exposure to Generative AI models, prompt engineering, or integration with APIs like OpenAI, Cohere, or Google Gemini.
Qualifications/Requirements
B.Tech/Masters in Computer Science/Engineering or Electrical/Electronic Engineering.
Aeris may conduct background checks to verify the information provided in your application and assess your suitability for the role. The scope and type of checks will comply with the applicable laws and regulations of the country where the position is based. Additional detail will be provided via the formal application process.
Aeris walks the walk on diversity. We’re a brilliant mix of varying ethnicities, religions, cultures, sexual orientations, gender identities, ages and professional/personal/military experiences – and that’s by design. Diverse perspectives are essential to our culture, innovative process and competitive edge. Aeris is proud to be an equal opportunity employer.
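Since the posting calls for using GenAI in day-to-day development and names integration with APIs like OpenAI, here is a minimal, hedged sketch of such an integration using the OpenAI Python client. The model name, prompts, and code-review use case are illustrative assumptions, not part of the role description.

```python
# Hedged sketch: calling a GenAI API (here, the OpenAI Python client) to
# assist development, e.g. reviewing a snippet. Model name and prompts are
# illustrative; OPENAI_API_KEY is read from the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system", "content": "You review Java microservice code for concurrency issues."},
        {"role": "user", "content": "Review this handler for thread-safety problems: ..."},
    ],
)
print(response.choices[0].message.content)
```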
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
Agivant is seeking a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.
Responsibilities:
Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness.
Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
Build and optimize data pipelines using a variety of technologies, including Elastic Search, AWS S3, Snowflake, and NFS.
Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum, and emerging technologies in data engineering.
Contribute to the development and enhancement of our data warehouse architecture.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
- At least 3+ years of experience with Snowflake data warehousing technologies.
- At least 3+ years of experience creating and maintaining Airflow ETL pipelines.
- Minimum 3+ years of professional experience with Python for data manipulation and automation.
- Working experience with Elastic Search and its application in data pipelines.
- Proficiency in SQL and experience with data modeling techniques.
- Strong understanding of cloud-based data storage solutions such as AWS S3.
- Experience working with NFS and other file storage systems.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
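As a rough illustration of the Airflow ETL/ELT pipelines this role maintains, here is a minimal sketch of a daily S3-to-Snowflake DAG. The DAG id, schedule, and task bodies are hypothetical placeholders, not Agivant's actual pipeline.

```python
# Minimal Airflow DAG sketch: extract to S3, then load into Snowflake.
# DAG id and task internals are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_s3(**context):
    # Placeholder: pull the daily extract from the source and write it to S3.
    ...


def load_into_snowflake(**context):
    # Placeholder: COPY the staged files from S3 into a Snowflake table.
    ...


with DAG(
    dag_id="daily_orders_elt",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_into_snowflake", python_callable=load_into_snowflake)
    extract >> load
```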
Posted 3 days ago
10.0 - 14.0 years
0 Lacs
kolkata, west bengal
On-site
As a Senior Machine Learning Engineer with over 10 years of experience, you will play a crucial role in designing, building, and deploying scalable machine learning systems in production. In this role, you will collaborate closely with data scientists to operationalize models, take end-to-end ownership of ML pipelines, and enhance the reliability, automation, and performance of our ML infrastructure.
Your primary responsibilities will include designing and constructing robust ML pipelines and services for training, validation, and model deployment. You will work with stakeholders such as data scientists, solution architects, and DevOps engineers to ensure alignment with project goals and requirements. Additionally, you will be responsible for ensuring cloud integration compatibility with AWS and Azure, building reusable infrastructure components following best practices in DevOps and MLOps, and adhering to security standards and regulatory compliance.
To excel in this role, you should possess strong programming skills in Python, have deep experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn, and be proficient in MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience deploying models using Docker and Kubernetes, familiarity with cloud platforms and ML services, and proficiency in data engineering tools are essential for success in this position. Knowledge of CI/CD, version control, and infrastructure as code, along with experience in monitoring/logging tools, will be advantageous.
Good-to-have skills include experience with feature stores and experiment tracking platforms; knowledge of edge/embedded ML, model quantization, and optimization; and familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases and experience leading cross-functional initiatives or mentoring junior engineers will also be beneficial.
Joining Ericsson will provide you with an exceptional opportunity to leverage your skills and creativity to address some of the world's toughest challenges. You will be part of a diverse team of innovators committed to pushing the boundaries of innovation and crafting groundbreaking solutions. As a member of this team, you will be challenged to think beyond conventional limits and contribute to shaping the future of technology.
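To make the MLflow side of such a role concrete, below is a minimal, hedged sketch of tracking a training run and registering the resulting model. The experiment name, model, and metric are illustrative stand-ins, not Ericsson's actual stack.

```python
# Hedged sketch: log a training run and register the model with MLflow.
# Experiment and registered-model names are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")  # hypothetical experiment
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    # Registering the model makes it visible to downstream deployment jobs.
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-classifier")
```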
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
chandigarh
On-site
You should have 2-7 years of experience in software engineering and ML development. Proficiency in Python and ML libraries like Scikit-learn, TensorFlow, or PyTorch is essential. Your responsibilities will include building and evaluating models, data preprocessing, and feature engineering. A good understanding of REST APIs, Docker, Git, and CI/CD tools is important, as is a strong foundation in software engineering principles such as data structures, algorithms, and design patterns.
Hands-on experience with MLOps platforms like MLflow, TFX, Airflow, and Kubeflow is preferred. Exposure to NLP, large language model (LLM), or computer vision projects will be beneficial for this role. Experience with cloud platforms such as AWS, GCP, and Azure, as well as managed ML services, is a plus. Any contributions to open-source ML libraries or participation in ML competitions like Kaggle or DrivenData will be considered an advantage.
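For context on the model-building, preprocessing, and feature-engineering work mentioned above, here is a small scikit-learn Pipeline sketch; the column names, model choice, and training DataFrame are hypothetical.

```python
# Illustrative sketch: preprocessing + model in one scikit-learn Pipeline.
# Feature names and the training DataFrame are hypothetical.
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric = ["age", "income"]   # hypothetical numeric features
categorical = ["plan_type"]   # hypothetical categorical feature

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

clf = Pipeline([
    ("preprocess", preprocess),
    ("model", LogisticRegression(max_iter=1000)),
])

# Usage against a hypothetical training frame:
# clf.fit(train_df[numeric + categorical], train_df["label"])
# clf.predict(new_df[numeric + categorical])
```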
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
indore, madhya pradesh
On-site
You should have 6-8 years of hands-on experience with Big Data technologies such as PySpark (DataFrame and SparkSQL), Hadoop, and Hive, along with good hands-on experience with Python and Bash scripts and a solid understanding of SQL and data warehouse concepts. Strong analytical, problem-solving, data analysis, and research skills are crucial for this role, as is a demonstrable ability to think creatively and independently rather than relying solely on readily available tools. Excellent communication, presentation, and interpersonal skills are a must for effective collaboration within the team.
Hands-on experience with cloud-platform Big Data services such as IAM, Glue, EMR, Redshift, S3, and Kinesis is required. Experience orchestrating with Airflow or another job scheduler is highly beneficial, as is familiarity with migrating workloads from on-premise to cloud and between clouds.
In this role, you will develop efficient ETL pipelines based on business requirements while adhering to development standards and best practices. Integration testing of different pipelines in the AWS environment and providing estimates for development, testing, and deployment across environments will be part of your responsibilities, along with participating in peer code reviews to ensure compliance with best practices. Creating cost-effective AWS pipelines using the necessary AWS services, such as S3, IAM, Glue, EMR, and Redshift, is a key aspect of this position.
The required experience ranges from 6 to 8 years in relevant fields. The job reference number for this position is 13024.
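A brief sketch of the PySpark DataFrame/SparkSQL work this posting describes, with hypothetical bucket paths and column names:

```python
# Sketch: mix the DataFrame API with SparkSQL in one PySpark ETL job.
# Bucket, paths, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")
cleaned = (
    orders
    .filter(F.col("status").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)
cleaned.createOrReplaceTempView("orders")

# SparkSQL aggregation over the cleaned view.
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_orders/"
)
```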
Posted 3 days ago
7.0 - 11.0 years
0 Lacs
maharashtra
On-site
As a skilled Snowflake Developer with over 7 years of experience, you will be responsible for designing, developing, and optimizing Snowflake data solutions. Your expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration will be crucial in building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.
Your key responsibilities will include:
- Designing and developing Snowflake databases, schemas, tables, and views following best practices.
- Writing complex SQL queries, stored procedures, and UDFs for data transformation.
- Optimizing query performance using clustering, partitioning, and materialized views.
- Implementing Snowflake features such as Time Travel, Zero-Copy Cloning, and Streams & Tasks (a hedged sketch follows this posting).
- Building and maintaining ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark.
- Integrating Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe).
- Developing CDC (Change Data Capture) and real-time data processing solutions.
- Designing star schema, snowflake schema, and data vault models in Snowflake.
- Implementing data sharing, secure views, and dynamic data masking.
- Ensuring data quality, consistency, and governance across Snowflake environments.
- Monitoring and optimizing Snowflake warehouse performance (scaling, caching, resource usage).
- Troubleshooting data pipeline failures, latency issues, and query bottlenecks.
- Collaborating with data analysts, BI teams, and business stakeholders to deliver data solutions.
- Documenting data flows, architecture, and technical specifications.
- Mentoring junior developers on Snowflake best practices.
Required Skills & Qualifications:
- 7+ years in database development, data warehousing, or ETL.
- 4+ years of hands-on Snowflake development experience.
- Strong SQL or Python skills for data processing.
- Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
- Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT).
- Certifications: SnowPro Core Certification (preferred).
Preferred Skills:
- Familiarity with data governance and metadata management.
- Familiarity with DBT, Airflow, SSIS & IICS.
- Knowledge of CI/CD pipelines (Azure DevOps).
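As a concrete, hedged illustration of the Streams & Tasks feature called out above, the Python sketch below issues the relevant Snowflake DDL through the official connector; all credentials and object names are placeholders.

```python
# Hedged sketch: set up CDC with a Snowflake Stream and consume it on a
# schedule with a Task. Account details and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# A stream records inserts/updates/deletes on the source table.
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE raw_orders")

# A task periodically drains the stream into the curated table.
cur.execute("""
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '15 MINUTE'
    AS
      INSERT INTO curated_orders
      SELECT order_id, amount, order_ts FROM orders_stream
""")
cur.execute("ALTER TASK merge_orders RESUME")  # tasks are created suspended
```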
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
jaipur, rajasthan
On-site
As an AI / ML Engineer, you will apply your Artificial Intelligence and Machine Learning expertise to develop innovative solutions. You should hold a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, AI/ML, Mathematics, or a related field. With a minimum of 6 years of experience in AI/ML, you are expected to demonstrate proficiency in Python and ML libraries such as scikit-learn, XGBoost, pandas, NumPy, matplotlib, and seaborn.
In this role, you will need a strong understanding of machine learning algorithms and deep learning architectures, including CNNs, RNNs, and Transformers. Hands-on experience with TensorFlow, PyTorch, or Keras is essential. You should also have expertise in data preprocessing, feature selection, exploratory data analysis (EDA), and model interpretability. Additionally, familiarity with API development and deploying models using frameworks like Flask, FastAPI, or similar tools is required.
Experience with MLOps tools such as MLflow, Kubeflow, DVC, and Airflow will be beneficial. Knowledge of cloud platforms like AWS (SageMaker, S3, Lambda), GCP (Vertex AI), or Azure ML is preferred. Proficiency in version control using Git, CI/CD processes, and containerization with Docker is essential for this role.
Bonus skills that would be advantageous include familiarity with NLP frameworks (e.g., spaCy, NLTK, Hugging Face Transformers), computer vision experience using OpenCV or YOLO/Detectron, and knowledge of Reinforcement Learning or Generative AI (GANs, LLMs). Experience with vector databases such as Pinecone or Weaviate, as well as LangChain for building AI agents, is a plus. Familiarity with data labeling platforms and annotation workflows is also beneficial.
In addition to technical skills, you should possess soft skills such as an analytical mindset, strong problem-solving abilities, and effective communication and collaboration. The ability to work independently in a fast-paced, agile environment is crucial, as is a passion for AI/ML and a proactive approach to staying updated with the latest developments in the field.
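Since the posting asks for deploying models behind Flask or FastAPI, a minimal FastAPI serving sketch follows; the artifact path and feature schema are hypothetical assumptions.

```python
# Illustrative FastAPI model-serving sketch; the model file and features are
# hypothetical. Run locally with: uvicorn app:app --reload
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical trained artifact


class Features(BaseModel):
    age: float
    income: float


@app.post("/predict")
def predict(features: Features):
    # Shape the request into the 2-D array scikit-learn models expect.
    prediction = model.predict([[features.age, features.income]])[0]
    return {"prediction": int(prediction)}
```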
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
kolkata, west bengal
On-site
Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With a team of over 125,000 professionals across more than 30 countries, we are motivated by curiosity, entrepreneurial agility, and the desire to create lasting value for our clients. Our purpose is the relentless pursuit of a world that works better for people, and we serve and transform leading enterprises, including the Fortune Global 500, leveraging our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
We are currently looking for a Principal Consultant - Snowflake Sr. Data Engineer (Snowflake + Python/Pyspark) to join our team! As a Snowflake Sr. Data Engineer, you will provide technical direction and lead a group of developers toward a common goal. You should have experience in the IT industry and be proficient in building productionized data ingestion and processing pipelines in Snowflake. Additionally, you should be well-versed in data warehousing concepts and have expertise in Snowflake features and integration with other data processing tools. Experience with Python programming and Pyspark for data analysis is essential for this role.
Key Responsibilities:
- Work on requirement gathering, analysis, design, development, and deployment.
- Write SQL queries against Snowflake and develop scripts to extract, load, and transform data.
- Understand data warehouse concepts and Snowflake architecture.
- Hands-on experience with Snowflake utilities such as SnowSQL, SnowPipe, tables, Tasks, and Streams.
- Experience with Snowflake AWS data services or Azure data services.
- Proficiency in the Python programming language and knowledge of packages like pandas, NumPy, etc.
- Design and develop efficient ETL jobs using Python and Pyspark.
- Use Python and Pyspark for data cleaning, pre-processing, and transformation tasks.
- Implement CDC or SCD Type 2 and build data ingestion pipelines (a hedged sketch follows this posting).
- Work with workflow management tools for scheduling and managing ETL jobs.
Qualifications:
- B.E./Masters in Computer Science, Information Technology, or Computer Engineering.
- Relevant years of experience as a Snowflake Sr. Data Engineer.
- Skills in Snowflake, Python/Pyspark, AWS/Azure, ETL concepts, Airflow or any orchestration tool, and data warehousing concepts.
If you are passionate about leveraging your skills to drive innovative solutions and create value in a dynamic environment, we encourage you to apply for this exciting opportunity. Join us in shaping the future and making a difference!
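One of the named responsibilities is implementing SCD Type 2. Below is a minimal, hedged two-step version against Snowflake from Python, with hypothetical table and column names; a production pipeline would typically wrap both statements in a transaction and track more attributes.

```python
# Hedged SCD Type 2 sketch: expire changed current rows, then insert the new
# versions. Tables, columns, and credentials are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="DW",
)
cur = conn.cursor()

# 1. Close out current dimension rows whose tracked attribute changed.
cur.execute("""
    UPDATE dim_customer d
    SET is_current = FALSE, valid_to = CURRENT_TIMESTAMP()
    FROM stg_customer s
    WHERE d.customer_id = s.customer_id
      AND d.is_current
      AND d.address <> s.address
""")

# 2. Insert a fresh current row for every new or changed customer.
cur.execute("""
    INSERT INTO dim_customer (customer_id, address, valid_from, valid_to, is_current)
    SELECT s.customer_id, s.address, CURRENT_TIMESTAMP(), NULL, TRUE
    FROM stg_customer s
    LEFT JOIN dim_customer d
      ON d.customer_id = s.customer_id AND d.is_current
    WHERE d.customer_id IS NULL OR d.address <> s.address
""")
```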
Posted 3 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving discussions, contribute to the overall project strategy, and continuously refine your skills to enhance application performance and user experience.
Roles & Responsibilities:
The Offshore Data Engineer plays a critical role in designing, building, and maintaining scalable data pipelines and infrastructure to support business intelligence, analytics, and machine learning initiatives. Working closely with onshore data architects and analysts, this role ensures high data quality, performance, and reliability across distributed systems. The engineer is expected to demonstrate technical proficiency, proactive problem-solving, and strong collaboration in a remote environment.
- Design and develop robust ETL/ELT pipelines to ingest, transform, and load data from diverse sources.
- Collaborate with onshore teams to understand business requirements and translate them into scalable data solutions.
- Optimize data workflows through automation, parallel processing, and performance tuning.
- Maintain and enhance data infrastructure including data lakes, data warehouses, and cloud platforms (AWS, Azure, GCP).
- Ensure data integrity and consistency through validation, monitoring, and exception handling.
- Contribute to data modeling efforts for both transactional and analytical use cases.
- Deliver clean, well-documented datasets for reporting, analytics, and machine learning.
- Proactively identify opportunities for cost optimization, governance, and process automation.
Professional & Technical Skills:
- Programming & Scripting: Proficiency in Databricks with SQL and Python for data manipulation and pipeline development.
- Big Data Technologies: Experience with Spark, Hadoop, or similar distributed processing frameworks.
- Workflow Orchestration: Hands-on experience with Airflow or equivalent scheduling tools.
- Cloud Platforms: Strong working knowledge of cloud-native services (AWS Glue, Azure Data Factory, GCP Dataflow).
- Data Modeling: Ability to design normalized and denormalized schemas for various use cases.
- ETL/ELT Development: Proven experience in building scalable and maintainable data pipelines.
- Monitoring & Validation: Familiarity with data quality frameworks and exception handling mechanisms (a small sketch follows this posting).
Good to Have Skills:
- DevOps & CI/CD: Exposure to containerization (Docker), version control (Git), and deployment pipelines.
- Data Governance: Understanding of metadata management, lineage tracking, and compliance standards.
- Visualization Tools: Basic knowledge of BI tools like Power BI, Tableau, or Looker.
- Machine Learning Support: Experience preparing datasets for ML models and feature engineering.
Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A 15 years full time education is required.
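To illustrate the validation and exception handling named under Monitoring & Validation, here is a small, hedged PySpark check of the kind that might gate a table publish on Databricks; the table names are hypothetical.

```python
# Hedged sketch: a simple data-quality gate before publishing a curated table.
# Table names are hypothetical; on Databricks a session already exists.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.table("staging.orders")  # hypothetical staged table

row_count = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()

# Raise (and fail the job) instead of silently publishing bad data.
if row_count == 0:
    raise ValueError("staging.orders is empty; aborting publish")
if null_keys > 0:
    raise ValueError(f"{null_keys} rows have a NULL order_id")

df.write.mode("overwrite").saveAsTable("curated.orders")
```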
Posted 3 days ago
1.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Es Magico
Es Magico is an AI-first enterprise transformation organisation that goes beyond consulting: we deliver scalable execution across sectors such as BFSI, Healthcare, Entertainment, and Education. With offices in Mumbai and Bengaluru, our mission is to augment the human workforce by deploying bespoke AI employees across business functions, innovating swiftly and executing with trust. We also partner with early-stage startups as a venture builder, transforming 0-to-1 ideas into AI-native, scalable products.
Role: MLOps Engineer
Location: Bengaluru (Hybrid)
Experience: 1-4 years
Joining: Immediate
Key Responsibilities
Design, develop, and maintain scalable ML pipelines for training, testing, and deployment.
Automate model deployment, monitoring, and version control across dev/staging/prod environments.
Integrate CI/CD pipelines for ML models using tools like MLflow, Kubeflow, Airflow, etc.
Manage containerized workloads using Docker and orchestrate with Kubernetes or GKE.
Collaborate closely with data scientists and product teams to optimize the ML model lifecycle.
Monitor performance and reliability of deployed models and troubleshoot issues as needed.
Technical Skills
Experience with MLOps frameworks: MLflow, TFX, Kubeflow, or SageMaker Pipelines.
Proficient in Python and common ML libraries (scikit-learn, pandas, etc.).
Solid understanding of CI/CD practices and tools (e.g., GitHub Actions, Jenkins, Cloud Build).
Familiar with Docker, Kubernetes, and Google Cloud Platform (GCP).
Comfortable with data pipeline tools like Airflow, Prefect, or equivalent.
Preferred Qualifications
1-4 years of experience in MLOps, ML engineering, or DevOps with ML workflows.
Prior experience with model monitoring, drift detection, and automated retraining.
Exposure to data versioning tools like DVC or Delta Lake is a plus.
GCP certifications or working knowledge of Vertex AI is a strong advantage.
How to Apply
Send your resume to [HIDDEN TEXT] with the subject line: Application - MLOps Engineer.
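The preferred qualifications mention drift detection and automated retraining; a minimal, hedged sketch of one common approach, a two-sample Kolmogorov-Smirnov test per feature, is shown below with synthetic stand-in data.

```python
# Hedged drift-detection sketch: two-sample KS test comparing training-time
# and live feature values. Threshold and data are illustrative stand-ins.
import numpy as np
from scipy.stats import ks_2samp


def feature_drifted(train_values, live_values, p_threshold=0.05):
    """Flag drift when the samples are unlikely to share one distribution."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold


rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)  # stand-in for training feature values
live = rng.normal(0.4, 1.0, 5000)   # stand-in for recent production values

if feature_drifted(train, live):
    print("Drift detected; a retraining job could be triggered here")
```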
Posted 3 days ago