491 Data Pipeline Jobs - Page 15

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

20 - 32 Lacs

Bengaluru

Hybrid

Job Title: Senior Data Engineer
Experience: 9+ Years
Location: Whitefield, Bangalore
Notice Period: Serving notice or immediate joiners

Role & Responsibilities:
- Design and implement scalable data pipelines for ingesting, transforming, and loading data from diverse sources and tools.
- Develop robust data models to support analytical and reporting requirements.
- Automate data engineering processes using appropriate scripting languages and frameworks.
- Collaborate with engineers, process managers, and data scientists to gather requirements and deliver effective data solutions.
- Serve as a liaison between engineering and business teams on all data-related initiatives.
- Automate monitoring and alerting for data pipelines, products, and dashboards; provide support for issue resolution, including on-call responsibilities.
- Write optimized and modular SQL queries, including view and table creation as required.
- Define and implement best practices for data validation, ensuring alignment with enterprise standards.
- Manage QA data environments, including test data creation and maintenance.

Qualifications:
- 9+ years of experience in data engineering or a related field.
- Proven experience with Agile software development practices.
- Strong SQL skills and experience working with both RDBMS and NoSQL databases.
- Hands-on experience with cloud-based data warehousing platforms such as Snowflake and Amazon Redshift.
- Proficiency with cloud technologies, preferably AWS.
- Deep knowledge of data modeling, data warehousing, and data lake concepts.
- Practical experience with ETL/ELT tools and frameworks.
- 5+ years of experience in application development using Python, SQL, Scala, or Java.
- Experience working with real-time data streaming and associated platforms.

Note: The candidate should be based in Bangalore, as one technical round will be conducted face-to-face at the Bellandur, Bangalore office.
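For illustration, the "optimized and modular SQL queries, including view and table creation" requirement can be sketched with a tiny self-contained example. It uses Python's stdlib sqlite3 so it runs anywhere; the table and column names are invented, not from the posting.

```python
# Sketch of "modular SQL" via views, using stdlib sqlite3 (names are made up).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, status TEXT);
    INSERT INTO orders VALUES
        (1, 'acme',   120.0, 'COMPLETED'),
        (2, 'acme',    80.0, 'CANCELLED'),
        (3, 'globex',  50.0, 'COMPLETED');

    -- A view encapsulates the filter so downstream queries stay simple.
    CREATE VIEW completed_orders AS
        SELECT customer, amount FROM orders WHERE status = 'COMPLETED';
""")

for row in conn.execute(
    "SELECT customer, SUM(amount) FROM completed_orders GROUP BY customer"
):
    print(row)  # ('acme', 120.0) then ('globex', 50.0)
```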

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Remote

Hi candidates, we have openings with one of our MNC clients (remote, C2H). Please apply here or share your updated resume at chandrakala.c@i-q.co.

AWS Data Engineer JD:
- Data Engineer with a minimum of 3-5+ years of data engineering experience. The role requires deep knowledge of data engineering techniques to create data pipelines and build data assets.
- At least 4+ years of strong hands-on programming experience with PySpark / Python / Boto3, including Python frameworks and libraries, following Python best practices.
- Strong experience in code optimization using Spark SQL and PySpark.
- Understanding of code versioning, Git repositories, and JFrog Artifactory.
- AWS architecture knowledge, especially S3, EC2, Lambda, Redshift, CloudFormation, etc., and the ability to explain the benefits of each.
- Refactoring of legacy codebases: clean, modernize, and improve readability and maintainability.
- Unit tests / TDD: write tests before code, ensure functionality, catch bugs early.
- Fixing difficult bugs: debug complex code, isolate issues, and resolve performance, concurrency, or logic flaws.
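For illustration only, a minimal sketch of the PySpark-plus-Boto3 work this JD describes. The bucket names, prefix, and fields are hypothetical, and it assumes AWS credentials and the hadoop-aws (s3a) package are already configured.

```python
# Illustrative only: buckets, prefix, and schema are hypothetical.
import boto3
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-orders-pipeline").getOrCreate()

# List recent input files with boto3 (assumes AWS credentials are configured).
s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="example-raw-bucket", Prefix="orders/2024/")
paths = [f"s3a://example-raw-bucket/{o['Key']}" for o in resp.get("Contents", [])]

# Read, clean, and aggregate with Spark SQL functions rather than Python UDFs,
# keeping the work inside Spark's optimizer (a common optimization practice).
orders = spark.read.json(paths)
daily = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)
daily.write.mode("overwrite").parquet("s3a://example-curated-bucket/orders_daily/")
```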

Posted 1 month ago

Apply

4.0 - 8.0 years

15 - 18 Lacs

Lucknow

Work from Office

Urgent hiring for Data Engineers
Job location: Lucknow (on-site)
Experience: 4+ yrs (relevant)
Salary range: 15-18 LPA
No. of open positions: 10
Only immediate joiners will be considered.

Job Overview:
We are seeking experienced Data Engineers to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining large-scale data systems and pipelines. You will work closely with our data science team to prepare data for prescriptive and predictive modeling, ensuring high-quality data outputs.

Key Responsibilities:
- Analyze and organize raw data from various sources
- Build and maintain data systems and pipelines
- Prepare data for prescriptive and predictive modeling
- Combine raw information from different sources to generate valuable insights
- Enhance data quality and reliability

Requirements:
- 4+ years of experience as a Data Engineer or in a similar role
- Technical expertise in data models, data mining, and segmentation techniques
- Experience with cloud data technologies (Azure Data Factory, Databricks)
- Knowledge of CI/CD pipelines and Jenkins
- Strong programming skills in Python
- Hands-on experience with SQL databases

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 12 Lacs

Bengaluru

Work from Office

4+ yrs of experience across data science, ML frameworks, MLOps, Python, data engineering, cloud platforms, edge computing, data manipulation libraries, model development, object detection, experiment design, and A/B testing. Reach me at mailcv108@gmail.com or WhatsApp me at +91 9611702105.

Posted 1 month ago

Apply

8.0 - 10.0 years

9 - 13 Lacs

Bengaluru

Work from Office

What you'll be doing:
- Assist in developing machine learning models based on project requirements
- Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality
- Perform statistical analysis and fine-tuning using test results
- Support training and retraining of ML systems as needed
- Help build data pipelines for collecting and processing data efficiently
- Follow coding and quality standards while developing AI/ML solutions
- Contribute to frameworks that help operationalize AI models

What we seek in you:
- 8+ years of experience in the IT industry
- Strong in programming languages like Python
- Hands-on experience with at least one cloud (GCP preferred)
- Experience working with Docker
- Environment management (e.g., venv, pip, poetry)
- Experience with orchestrators like Vertex AI Pipelines, Airflow, etc.
- Understanding of the full ML lifecycle end-to-end
- Data engineering and feature engineering techniques
- Experience with ML modelling and evaluation metrics
- Experience with TensorFlow, PyTorch, or another framework
- Experience with model monitoring
- Advanced SQL knowledge
- Awareness of streaming concepts like windowing, late arrival, triggers, etc.

Tooling:
- Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases
- Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices
- Scheduling: Cloud Composer, Airflow
- Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink
- CI/CD: Bitbucket + Jenkins / GitLab; Infrastructure as Code: Terraform

Life at Next:
At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.

Perks of working with us:
- Clear objectives to ensure alignment with our mission, fostering your meaningful contribution
- Abundant opportunities for engagement with customers, product managers, and leadership
- Progressive career paths, with insightful guidance from managers through ongoing feedforward sessions
- Cultivate and leverage robust connections within diverse communities of interest
- Choose your mentor to navigate your current endeavors and steer your future trajectory
- Continuous learning and upskilling opportunities through Nexversity
- Flexibility to explore various functions, develop new skills, and adapt to emerging technologies
- A hybrid work model promoting work-life balance
- Comprehensive family health insurance coverage, prioritizing the well-being of your loved ones
- Accelerated career paths to actualize your professional aspirations

Who we are:
We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet our customers' unique needs. Join our passionate team and tailor your growth with us!
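The streaming concepts this posting names (windowing, late arrival) can be sketched with PySpark Structured Streaming. The broker, topic, and console sink below are placeholders, and the job assumes the spark-sql-kafka package is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("windowing-demo").getOrCreate()

# Read a stream of events (Kafka broker and topic are hypothetical).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clicks")
    .load()
    .select(
        F.col("timestamp").alias("event_time"),
        F.col("value").cast("string").alias("payload"),
    )
)

# Tumbling 5-minute windows; the watermark tells Spark how long to wait
# for late-arriving events before a window is finalized.
counts = (
    events
    .withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"))
    .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```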

Posted 1 month ago

Apply

8.0 - 13.0 years

40 - 65 Lacs

Bengaluru

Work from Office

About the team:
When 5% of Indian households shop with us, it's important to build resilient systems to manage millions of orders every day. We've done this – with zero downtime! Sounds impossible? Well, that's the kind of engineering muscle that has helped Meesho become the e-commerce giant it is today. We value speed over perfection and see failures as opportunities to become better. We've taken steps to inculcate a strong 'Founder's Mindset' across our engineering teams, making us grow and move fast. We place special emphasis on the continuous growth of each team member, with regular 1-1s and open communication. As Engineering Manager, you will be part of a team of self-starters who thrive on teamwork and constructive feedback. We know how to party as hard as we work! If we aren't building unparalleled tech solutions, you can find us debating the plot points of our favourite books and games, or even gossiping over chai. So, if a day filled with building impactful solutions with a fun team sounds appealing to you, join us.

About the role:
We are looking for a seasoned Engineering Manager well-versed in emerging technologies to join our team. As an Engineering Manager, you will ensure consistency and quality by shaping the right strategies. You will keep an eye on all engineering projects and ensure all duties are fulfilled. You will analyse other employees' tasks and carry on collaborations effectively. You will also transform newcomers into experts and build reports on the progress of all projects.

What you will do:
- Design tasks for other engineers, keeping Meesho's guidelines and standards in mind
- Keep a close watch on various projects and monitor progress
- Drive excellence in quality across the organisation and in the solutioning of product problems
- Collaborate with the sales and design teams to create new products
- Manage engineers and take ownership of projects while ensuring product scalability
- Conduct regular meetings to plan and develop reports on project progress

What you will need:
- Bachelor's / Master's degree in computer science
- At least 8+ years of professional experience
- At least 4+ years of experience managing software development teams
- Experience building large-scale distributed systems
- Experience with scalable platforms
- Expertise in Java/Python/Go and multithreading
- Good understanding of Spark and its internals
- Deep understanding of transactional and NoSQL DBs
- Deep understanding of messaging systems such as Kafka
- Good experience with cloud infrastructure, preferably AWS
- Ability to drive sprints and OKRs, with good stakeholder management experience
- Exceptional team management skills; experience managing a team of 4-5 junior engineers
- Good understanding of streaming and real-time pipelines
- Good understanding of data modelling concepts and data quality tools
- Good knowledge of business intelligence tools: Metabase, Superset, Tableau, etc.
- Good to have: knowledge of Trino, Flink, Presto, Druid, Pinot, etc.
- Good to have: data pipeline building experience

Posted 1 month ago

Apply

0.0 - 2.0 years

3 - 6 Lacs

Noida

Work from Office

Required Skills:
- Absolute clarity in OOP fundamentals and data structures
- Hands-on experience with Python data structures: list, dict, set, strings, lambda, etc.
- Hands-on experience working with Spark and Hadoop
- Excellent written and verbal communication and presentation skills

Roles and responsibilities:
- Maintain and improve existing projects
- Collaborate with the technical team to develop new features and troubleshoot issues
- Lead projects: understand the requirements and distribute work to the technical team
- Adhere to project/task timelines and quality standards
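A small, self-contained snippet illustrating the hands-on basics the posting names (list, dict, set, strings, lambda); the data here is made up.

```python
# Hands-on basics: list, dict, set, strings, lambda.
orders = [("A01", 250), ("A02", 400), ("A01", 150)]   # list of tuples

# dict: accumulate totals per customer
totals = {}
for cust, amount in orders:
    totals[cust] = totals.get(cust, 0) + amount        # {'A01': 400, 'A02': 400}

# set: unique customers
customers = {cust for cust, _ in orders}               # {'A01', 'A02'}

# lambda + sorted: rank customers by spend, highest first
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# string formatting for a quick report line
print(", ".join(f"{c}: {t}" for c, t in ranked))
```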

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 18 Lacs

Bengaluru

Work from Office

We are looking for a Support Engineer to join our dynamic AI & Data team. This is an exciting opportunity to work on real-time enterprise projects, engage with stakeholders, and gain hands-on exposure to the latest data technologies and documentation practices.

Posted 1 month ago

Apply

4.0 - 9.0 years

18 - 32 Lacs

Kolkata, Hyderabad, Bengaluru

Work from Office

Description - External
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of AI/ML Engineer! We are looking for candidates with relevant experience in designing and developing machine learning and deep learning systems, professional software development experience, and hands-on experience running machine learning tests and experiments and implementing appropriate ML algorithms.

Responsibilities:
• Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products
• Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers
• Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services
• Build and implement machine learning models and prototype solutions for proof-of-concept
• Scale existing ML models into production on a variety of cloud platforms
• Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams

Qualifications we seek in you!
Minimum Qualifications / Skills:
• Bachelor's degree in computer science engineering, information technology, or a BSc in Computer Science, Mathematics, or a similar field
• Master's degree is a plus
• Integration – APIs, microservices, and ETL/ELT patterns
• DevOps (good to have) – Ansible, Jenkins, ELK
• Containerization – Docker, Kubernetes, etc.
• Orchestration – Airflow, Step Functions, Control-M, etc.
• Languages and scripting – Python, Scala, Java, etc.
• Cloud services – AWS, GCP, Azure, and cloud-native
• Analytics and ML tooling – SageMaker, ML Studio
• Execution paradigm – low latency/streaming, batch

Preferred Qualifications / Skills:
• Data platforms – Big Data (Hadoop, Spark, Hive, Kafka, etc.) and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake, etc.)
• Visualization tools – Power BI, Tableau

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

5.0 - 10.0 years

17 - 30 Lacs

Noida

Remote

JD - Required skills:
- 5+ years of industry experience in data engineering support and enhancement
- Proficient in Google Cloud Platform (GCP) services such as Dataflow, BigQuery, Cloud Storage, and Pub/Sub
- Strong understanding of data pipeline architectures and ETL processes
- Experience with the Python programming language for data processing
- Knowledge of SQL and experience with relational databases
- Familiarity with version control systems like Git
- Ability to analyze and troubleshoot complex data pipeline issues
- Software engineering experience in optimizing data pipelines to improve performance and reliability
- Continuously optimize data pipeline efficiency, reduce operational costs, and reduce the number of issues/failures
- Automate repetitive tasks in data processing and management
- Experience in monitoring and alerting for data pipelines
- Continuously improve data pipeline reliability through analysis and testing
- Perform SLA-oriented monitoring for critical pipelines and, where needed, suggest and (after business approval) implement improvements for SLA adherence
- Monitor the performance and reliability of GCP data pipelines, Informatica ETL workflows, MDM, and Control-M jobs
- Maintain infrastructure reliability for GCP data pipelines, Informatica ETL workflows, MDM, and Control-M jobs
- Conduct post-incident reviews and implement improvements for data pipelines
- Develop and maintain documentation for data pipeline systems and processes
- Excellent communication and documentation skills
- Strong problem-solving and analytical skills
- Open to working in a 24x7 shift
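As a sketch of the SLA-oriented monitoring this JD describes, here is a hypothetical BigQuery freshness check using the google-cloud-bigquery client. The project, dataset, table, load_ts column, and the 2-hour SLA are all assumptions for illustration.

```python
# Minimal freshness check a pipeline-support engineer might schedule.
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

client = bigquery.Client()  # assumes GCP credentials are configured

sql = """
    SELECT MAX(load_ts) AS last_load
    FROM `example_project.example_dataset.orders`
"""
last_load = list(client.query(sql).result())[0].last_load

# Hypothetical SLA: the table must refresh at least every 2 hours.
if last_load is None or datetime.now(timezone.utc) - last_load > timedelta(hours=2):
    # In a real setup this would alert an on-call channel instead of printing.
    print(f"SLA breach: table not refreshed since {last_load}")
else:
    print(f"OK: last load at {last_load}")
```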

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Pune

Hybrid

Skills: Apache Airflow and Superset
Work model: Hybrid

Job Description:
- 6+ years of experience overall
- Experience in Apache Superset for data exploration and visualisation
- Experience in Apache Beam and Flink for data pipelines/ETL
- Experience in Apache Airflow DAGs (collections of execution tasks)
- Good to have: knowledge of SQL, MongoDB, or PostgreSQL
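For readers unfamiliar with Airflow DAGs (the "collection of execution tasks" the JD mentions), a minimal Airflow 2.x sketch; the DAG and task names are placeholders.

```python
# A minimal Airflow DAG sketch (task bodies and names are placeholders).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull from source")


def transform():
    print("clean and aggregate")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # dependency: extract runs before transform
```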

Posted 1 month ago

Apply

1.0 - 4.0 years

10 - 14 Lacs

Pune

Work from Office

Overview:
Design, develop, and maintain data pipelines and ETL/ELT processes using PySpark, Databricks, BigQuery, Airflow, and Composer. Optimize performance for large datasets through techniques such as partitioning, indexing, and Spark optimization. Collaborate with cross-functional teams to resolve technical issues and gather requirements.

Responsibilities:
- Ensure data quality and integrity through data validation and cleansing processes
- Analyze existing SQL queries, functions, and stored procedures for performance improvements
- Develop database routines such as procedures, functions, and views/materialized views
- Participate in data migration projects and understand technologies like Delta Lake, warehouses, and BigQuery
- Debug and solve complex problems in data pipelines and processes

Qualifications:
- Bachelor's degree in computer science, engineering, or a related field
- Strong understanding of distributed data processing platforms like Databricks and BigQuery
- Proficiency in Python, PySpark, and SQL
- Experience with performance optimization for large datasets
- Strong debugging and problem-solving skills
- Fundamental knowledge of cloud services, preferably Azure or GCP
- Excellent communication and teamwork skills

Nice to have:
- Experience in data migration projects
- Understanding of technologies like Delta Lake and warehouses

What we offer you:
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing
- Flexible working arrangements, advanced technology, and collaborative workspaces
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles
- An environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum

At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/resumes. Please do not forward CVs/resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
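To illustrate the partitioning technique the overview lists, a small PySpark sketch; the paths and column names are invented.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partitioning-demo").getOrCreate()

df = spark.read.parquet("/data/events")  # hypothetical input path

# Write partitioned by date so downstream queries prune to the days they need.
(
    df.withColumn("event_date", F.to_date("event_ts"))
    .repartition("event_date")            # co-locate rows per partition value
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/events_partitioned")
)

# A reader filtering on the partition column now scans only matching folders.
one_day = spark.read.parquet("/data/events_partitioned").where(
    F.col("event_date") == "2024-01-15"
)
print(one_day.count())
```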

Posted 1 month ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Chennai

Hybrid

Hi [Candidate Name],

We are hiring for a Data Engineering role with a leading organization working on cutting-edge cloud and data solutions. If you're an experienced professional looking for your next challenge, this could be a great fit!

Key skills required:
- Strong experience in data engineering and cloud data pipelines
- Proficiency in at least 3 of: Java, Python, Spark, Scala, SQL
- Hands-on experience with tools like Google BigQuery, Apache Kafka, Airflow, and GCP Pub/Sub
- Knowledge of microservices architecture, REST APIs, and DevOps tools (Docker, GitHub Actions, Terraform)
- Exposure to relational databases: MySQL, PostgreSQL, SQL Server
- Prior experience in an onshore/offshore model is a plus

If this sounds like a match for your profile, reply with your updated resume or apply directly. Looking forward to connecting!

Best regards,
Mahesh Babu M
Senior Executive - Recruitment
maheshbabu.muthukannan@sacha.solutions

Posted 1 month ago

Apply

4.0 - 8.0 years

12 - 20 Lacs

Bengaluru

Work from Office

Develop backend services using Python (FastAPI/Flask), integrate SQL databases, build Elasticsearch solutions, deploy to Azure/AWS, manage CI/CD, and mentor juniors. Optimize performance and ensure a clean, scalable architecture.

Required candidate profile: 4-8 years of Python experience with strong backend skills, FastAPI/Flask, SQL, Elasticsearch, and Azure/AWS exposure.
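A minimal FastAPI sketch of the kind of backend service this role describes; the resource model and routes are illustrative, not from the posting.

```python
# Minimal FastAPI service sketch (model fields and routes are illustrative).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    price: float


ITEMS: dict[int, Item] = {}  # in-memory store, stand-in for a SQL database


@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> Item:
    if item_id in ITEMS:
        raise HTTPException(status_code=409, detail="item already exists")
    ITEMS[item_id] = item
    return item


@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="item not found")
    return ITEMS[item_id]
```

Run locally with `uvicorn main:app --reload`, assuming the file is saved as main.py.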

Posted 2 months ago

Apply

10.0 - 16.0 years

27 - 37 Lacs

Hyderabad

Work from Office

Data Architect – Microsoft Fabric, Snowflake & Modern Data Platforms
Location: Hyderabad
Employment Type: Full-Time

Position Overview:
We are seeking a seasoned Data Architect with strong consulting experience to lead the design and delivery of modern data solutions across global clients. This role emphasizes hands-on architecture and engineering using Microsoft Fabric and Snowflake, while also contributing to internal capability development and practice growth. The ideal candidate will bring deep expertise in data modeling, modern data architecture, and data engineering, with a passion for innovation and client impact.

Key Responsibilities:

Client Delivery & Architecture (75%)
- Serve as the lead architect for client engagements, designing scalable, secure, and high-performance data solutions using Microsoft Fabric and Snowflake
- Apply modern data architecture principles, including data lakehouse, ELT/ETL pipelines, and real-time streaming
- Collaborate with cross-functional teams (data engineers, analysts, architects) to deliver end-to-end solutions
- Translate business requirements into technical strategies with measurable outcomes
- Ensure best practices in data governance, quality, and security are embedded in all solutions
- Deliver scalable data modeling solutions for various use cases leveraging a modern data platform

Practice & Capability Development (25%)
- Contribute to the development of reusable assets, accelerators, and reference architectures
- Support internal knowledge sharing and mentoring across the India-based consulting team
- Stay current with emerging trends in data platforms, AI/ML integration, and cloud-native architectures
- Collaborate with global teams to align on delivery standards and innovation initiatives

Qualifications:
- 10+ years of experience in data architecture and engineering, preferably in a consulting environment
- Proven experience with the Microsoft Fabric and Snowflake platforms
- Strong skills in data modeling, data pipeline development, and performance optimization
- Familiarity with Azure Synapse, Azure Data Factory, Power BI, and related Azure services
- Excellent communication and stakeholder management skills
- Experience working with global delivery teams and agile methodologies

Preferred Certifications:
- SnowPro Core Certification (preferred but not required)
- Microsoft Certified: Fabric Analytics Engineer Associate
- Microsoft Certified: Azure Solutions Architect Expert

Posted 2 months ago

Apply

3.0 - 5.0 years

3 - 8 Lacs

Bangalore Rural, Bengaluru

Work from Office

Job Title: Data Engineer (Mid-Level)
Experience: 3 to 5 Years
Location: Bangalore
Department: Data Engineering / Analytics / IT

Summary:
entomo is an Equal Opportunity Employer. The company promotes and supports a diverse workforce at all levels across the company, and ensures that its associates, potential hires, third-party support staff, and suppliers are not discriminated against, directly or indirectly, as a result of their colour, creed, caste, race, nationality, ethnicity or national origin, marital status, pregnancy, age, disability, religion or similar philosophical belief, sexual orientation, gender or gender reassignment, etc.

We are looking for a skilled and experienced Data Engineer with 3 to 5 years of experience to design, build, and optimize scalable data pipelines and infrastructure. The ideal candidate will work closely with data scientists, analysts, and software engineers to ensure reliable and efficient data delivery throughout our data ecosystem.

Key Responsibilities:
- Design, implement, and maintain robust data pipelines using ETL/ELT frameworks
- Build and manage data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
- Optimize data systems for performance, scalability, and cost-efficiency
- Ensure data quality, consistency, and integrity across various sources
- Collaborate with cross-functional teams to integrate data from multiple business systems
- Implement data governance, privacy, and security best practices
- Monitor and troubleshoot data workflows and conduct root cause analysis on data-related issues
- Automate data integration and validation processes using scripting languages (e.g., Python, SQL)
- Work with DevOps teams to deploy data solutions using CI/CD pipelines

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field
- 3 to 5 years of experience in data engineering or a similar role
- Strong proficiency in SQL and at least one programming language (Python, Java, or Scala)
- Experience with cloud platforms (AWS, Azure, or GCP)
- Hands-on experience with data pipeline tools (e.g., Apache Airflow, Luigi, dbt) is a bonus
- Proficient in working with relational and NoSQL databases
- Familiarity with big data tools (e.g., Spark, Hadoop) is a plus
- Good understanding of data architecture, modeling, and warehousing principles
- Excellent problem-solving and communication skills

Preferred Qualifications:
- Certifications in cloud platforms or data engineering tools
- Experience with containerization (Docker, Kubernetes)
- Knowledge of real-time data processing tools (Kafka, Flink)
- Exposure to data privacy regulations (GDPR, HIPAA)

Posted 2 months ago

Apply

7.0 - 12.0 years

25 - 27 Lacs

Kolkata

Hybrid

- 7+ years of property insurance experience, with 5+ years of experience managing mid-level professional teams or in a similar leadership position with a focus on data and/or performance management
- Extensive experience applying and/or developing performance management metrics for claims organizations
- Bachelor's degree in computer science, data science, statistics, or a related field is preferred
- Mastery-level knowledge of data analysis tools such as Excel, Tableau, or Power BI
- Demonstrated expertise in creating Power BI reports and dashboards, including the ability to connect to various data sources, prepare and model data, and create visualizations
- Expert knowledge of DAX for creating calculated columns and measures to meet report-specific requirements
- Expert knowledge of Power Query for importing, transforming, and shaping data
- Proficiency in SQL, with the ability to write complex queries and optimize performance
- Strong knowledge of ETL processes, data pipelines, and automation a plus
- Proficiency in managing tasks with Jira is an advantage
- Strong analytical and problem-solving skills
- Excellent attention to detail and the ability to work with large datasets
- Effective communication skills, both written and verbal
- Excellent visual communication and storytelling-with-data skills
- Ability to work independently and collaborate in a team environment

Posted 2 months ago

Apply

3.0 - 8.0 years

0 - 3 Lacs

Hyderabad

Work from Office

Job Overview:
We are seeking a skilled and proactive Machine Learning Engineer to join our smart manufacturing initiative. You will play a pivotal role in building data pipelines, developing ML models for defect prediction, and implementing closed-loop control systems to improve production quality.

Responsibilities:

Data Engineering & Pipeline Support:
- Validate and ensure correct data flow from InfluxDB/CDL to Smart box/Databricks platforms
- Collaborate with data scientists to support model development through accurate data provisioning
- Provide ongoing support in resolving data pipeline issues and performing ad-hoc data extractions

ML Model Development:
- Develop three distinct ML models to predict different types of defects using historical production data
- Predict short-term outcomes (the next 5 minutes) using techniques like artificial sampling and dimensionality reduction
- Ensure high model performance: accuracy of at least 95%; precision and recall of at least 80%
- Extract and present feature importance to support model interpretability

Closed-loop Control Architecture:
- Implement end-to-end ML-driven automation to proactively correct machine settings based on model predictions
- Key architecture components include: real-time data ingestion from PLCs via InfluxDB/CDL; model deployment and inference on Smart box; an output pipeline sharing actionable recommendations via PLC tags; and an automated retraining pipeline in the cloud, triggered by model drift or recommendation deviations

Qualifications:
- Proven experience with real-time data streaming from industrial systems (PLCs, InfluxDB/CDL)
- Hands-on experience building and deploying ML models in production
- Strong understanding of data preprocessing, dimensionality reduction, and synthetic data techniques
- Familiarity with cloud-based retraining workflows and model performance monitoring
- Experience in smart manufacturing or predictive maintenance is a plus
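As an illustration of training a defect classifier and checking it against metric targets like those the posting lists, a scikit-learn sketch on synthetic data; the features, model choice, and data are placeholders.

```python
# Sketch: train a defect classifier, report metrics and feature importances,
# and gate on targets (synthetic data; everything here is illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data: ~10% "defect" class, like a real defect stream.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
pred = model.predict(X_test)

acc = accuracy_score(y_test, pred)
prec = precision_score(y_test, pred)
rec = recall_score(y_test, pred)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f}")

# Feature importances support the interpretability requirement.
top = sorted(enumerate(model.feature_importances_), key=lambda t: -t[1])[:5]
print("top features:", top)

# Gate deployment on the targets (accuracy >= 0.95, precision/recall >= 0.80).
print("deployable:", acc >= 0.95 and prec >= 0.80 and rec >= 0.80)
```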

Posted 2 months ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Chennai

Work from Office

Job Summary:
We are seeking a skilled Big Data Tester & Developer to design, develop, and validate data pipelines and applications on large-scale data platforms. You will work on data ingestion, transformation, and testing workflows using tools from the Hadoop ecosystem and modern data engineering stacks.
Experience: 6-12 years

Key Responsibilities:
• Develop and test Big Data pipelines using Spark, Hive, Hadoop, and Kafka
• Write and optimize PySpark/Scala code for data processing
• Design test cases for data validation, quality, and integrity
• Automate testing using Python/Java and tools like Apache NiFi, Airflow, or dbt
• Collaborate with data engineers, analysts, and QA teams

Key Skills:
• Strong hands-on experience in Big Data tools: Spark, Hive, HDFS, Kafka
• Proficient in PySpark, Scala, or Java
• Experience in data testing, ETL validation, and data quality checks
• Familiarity with SQL, NoSQL, and data lakes
• Knowledge of CI/CD, Git, and automation frameworks

We are also looking for a skilled PostgreSQL Developer/DBA to design, implement, optimize, and maintain our PostgreSQL database systems. You will work closely with developers and data teams to ensure high performance, scalability, and data integrity.
Experience: 6-12 years

Key Responsibilities:
• Develop complex SQL queries, stored procedures, and functions
• Optimize query performance and database indexing
• Manage backups, replication, and security
• Monitor and tune database performance
• Support schema design and data migrations

Key Skills:
• Strong hands-on experience with PostgreSQL
• Proficient in SQL and PL/pgSQL scripting
• Experience in performance tuning, query optimization, and indexing
• Familiarity with logical replication, partitioning, and extensions
• Exposure to tools like pgAdmin, psql, or PgBouncer
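The query-tuning loop the PostgreSQL role describes (inspect a plan, add an index, re-check) can be sketched with psycopg2; the DSN, table, and index names are hypothetical.

```python
# Sketch of a PostgreSQL tuning loop (connection details and table are made up).
import psycopg2

conn = psycopg2.connect("dbname=shop user=postgres")
conn.autocommit = True
cur = conn.cursor()


def show_plan(sql):
    # EXPLAIN ANALYZE executes the query and prints the actual plan and timings.
    cur.execute("EXPLAIN ANALYZE " + sql)
    for (line,) in cur.fetchall():
        print(line)


query = "SELECT * FROM orders WHERE customer_id = 42"
show_plan(query)  # without an index, likely a sequential scan

cur.execute(
    "CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)"
)
show_plan(query)  # should now show an index scan

cur.close()
conn.close()
```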

Posted 2 months ago

Apply

8.0 - 10.0 years

27 - 42 Lacs

Chennai

Work from Office

Job Summary:
We are seeking a skilled and motivated Backend/Data Engineer with hands-on experience in MongoDB and Neo4j to design and implement data-driven applications. The ideal candidate will be responsible for building robust database systems, integrating complex graph- and document-based data models, and collaborating with cross-functional teams.
Experience: 6-12 years

Key Responsibilities:
• Design, implement, and optimize document-based databases using MongoDB
• Model and manage connected data using Neo4j (Cypher query language)
• Develop RESTful APIs and data services to serve and manipulate data stored in MongoDB and Neo4j
• Implement data pipelines for data ingestion, transformation, and storage
• Optimize database performance and ensure data integrity and security
• Collaborate with frontend developers, data scientists, and product managers
• Maintain documentation and support for database solutions

Required Skills:
• Strong proficiency in MongoDB: schema design, indexing, aggregation framework
• Solid experience with Neo4j: graph modeling, Cypher queries, performance tuning
• Programming proficiency in Python, Node.js, or Java
• Familiarity with REST APIs, GraphQL, or gRPC
• Experience with data modeling (both document and graph models)
• Knowledge of data security, backup, and recovery techniques

Preferred Skills:
• Experience with Mongoose, Spring Data MongoDB, or Neo4j-OGM
• Familiarity with data visualization tools (e.g., Neo4j Bloom)
• Experience with Docker, Kubernetes, or other DevOps tools
• Exposure to other databases (e.g., PostgreSQL, Redis)
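A short sketch showing both stores side by side from Python, as the role requires; the connection URIs, labels, and fields are invented.

```python
# Sketch of the two stores side by side (URIs, labels, fields are made up).
from pymongo import MongoClient
from neo4j import GraphDatabase

# MongoDB: an aggregation over a document collection.
mongo = MongoClient("mongodb://localhost:27017")
orders = mongo["shop"]["orders"]
pipeline = [
    {"$match": {"status": "COMPLETED"}},
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
]
for row in orders.aggregate(pipeline):
    print(row)

# Neo4j: Cypher over connected data via the official Python driver.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    result = session.run(
        "MATCH (c:Customer)-[:PLACED]->(o:Order) "
        "RETURN c.name AS name, count(o) AS orders ORDER BY orders DESC LIMIT 5"
    )
    for record in result:
        print(record["name"], record["orders"])
driver.close()
```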

Posted 2 months ago

Apply

4.0 - 9.0 years

20 - 30 Lacs

Gurugram

Hybrid

Who are we?
Falcon is a Series-A funded, cloud-native, AI-first banking technology and processing platform that helps banks, NBFCs, and PPIs quickly and affordably launch next-gen financial products, such as credit cards, credit lines on UPI, prepaid cards, fixed deposits, and loans. Since our 2022 launch, we've processed USD 1 Bn+ in transactions, signed on 12 of India's top financial institutions, and clocked USD 15 Mn+ in revenue. Our company is backed by marquee investors from around the world, including heavyweight investors from Japan and the USA, as well as leading Indian ventures and banks. For more details, please visit https://falconfs.com/

Experience level: Intermediate (6-10 years)

Key Responsibilities:
1. Design, develop, and support scalable ETL processes using open-source tools and data frameworks like AWS Glue, AWS Athena, Redshift, Apache Kafka, Apache Spark, Apache Airflow, and Pentaho Data Integration (PDI)
2. Design, create, and maintain data lakes and data warehouses on the AWS cloud
3. Maintain and optimise our data pipeline architecture, and formulate complex SQL queries for big data processing
4. Collaborate with product and engineering teams to design and develop a platform for data modelling and machine learning operations
5. Implement various data structures and algorithms to ensure we meet both functional and non-functional requirements
6. Maintain data privacy and compliance according to industry standards
7. Develop processes for monitoring and alerting on data quality issues
8. Continually evaluate new open-source technologies and stay updated with the latest data engineering trends

Key Qualifications:
1. Bachelor's or Master's degree in Computer Science or MCA from a reputed institute
2. Minimum of 7 years' experience in a data engineering role
3. Experience using Python, Java, or Scala for data processing (Python preferred)
4. Demonstrably deep understanding of SQL and analytical data warehouses
5. Solid experience with popular database frameworks such as PostgreSQL, MySQL, and MongoDB
6. Knowledge of AWS technologies like Lambda, Athena, Glue, and Redshift
7. Hands-on experience implementing ETL (or ELT) best practices at scale
8. Hands-on experience with data pipeline tools (Airflow, Luigi, Azkaban, dbt)
9. Experience with version control tools like Git
10. Familiarity with Linux-based systems and cloud services, preferably in environments like AWS
11. Strong analytical skills and ability to work in an agile and collaborative team environment

Preferred Skills:
1. Certification in any open-source big data technologies
2. Expertise in open-source big data technologies like Apache Hadoop, Apache Hive, and others
3. Familiarity with data visualisation tools like Apache Superset, Grafana, Tableau, etc.
4. Experience with CI/CD processes and containerization technologies like Docker or Kubernetes

Immediate joiners only.

Posted 2 months ago

Apply

8.0 - 10.0 years

10 - 15 Lacs

Chennai

Work from Office

The Technical Architect will design and implement data pipelines, cloud infrastructure, and AI-driven solutions that support business intelligence and analytics. They will collaborate with Data Engineers, Data Scientists, and Cloud teams to ensure seamless integration of technology solutions.

Roles and Responsibilities:
- Data Architecture & Engineering: Design and optimize data pipelines, ETL processes, and data lakes for structured and unstructured data
- Cloud Infrastructure: Architect cloud-based solutions using platforms like AWS, Azure, or Google Cloud for scalability and security
- Machine Learning & AI Integration: Work with Data Scientists to deploy ML models and AI-driven analytics
- Big Data Technologies: Implement Hadoop, Spark, Kafka, and other big data frameworks for high-performance data processing
- Security & Compliance: Ensure data governance, encryption, and compliance with industry standards
- Performance Optimization: Monitor and enhance data storage, retrieval, and processing efficiency
- Stakeholder Collaboration: Work with business leaders, analysts, and IT teams to align technology with business goals
- Disaster Recovery & Backup: Develop data recovery strategies for business continuity
- Documentation & Best Practices: Maintain technical documentation and enforce best practices in data architecture

Posted 2 months ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Kolkata, Gurugram, Bengaluru

Work from Office

Job Opportunity for a GCP Data Engineer

Role: Data Engineer
Location: Gurugram / Bangalore / Kolkata (5 days work from office)
Experience: 4+ years

Key Skills:
- Data Analysis / Data Preparation - Expert
- Dataset Creation / Data Visualization - Expert
- Data Quality Management - Advanced
- Data Engineering - Advanced
- Programming / Scripting - Intermediate
- Data Storytelling - Intermediate
- Business Analysis / Requirements Analysis - Intermediate
- Data Dashboards - Foundation
- Business Intelligence Reporting - Foundation
- Database Systems - Foundation
- Agile Methodologies / Decision Support - Foundation

Technical Skills:
- Cloud (GCP) - Expert
- Database systems (SQL and NoSQL / BigQuery / DBMS) - Expert
- Data warehousing solutions - Advanced
- ETL tools - Advanced
- Data APIs - Advanced
- Python, Java, Scala, etc. - Intermediate
- Understanding of the basics of distributed systems - Foundation
- Knowledge of algorithms and optimal data structures for analytics - Foundation
- Soft skills and time management - Foundation

Posted 2 months ago

Apply

7.0 - 8.0 years

8 - 18 Lacs

Pune

Hybrid

Warm greetings from Dataceria!

We are hiring a Senior SQL Quality Assurance Tester / Senior ETL Tester (immediate joiners). Please send your updated resume to careers@dataceria.com.

As a Senior SQL Quality Assurance Tester, you will be at the forefront of ensuring the quality and reliability of our data systems. You will play a critical role in analysing raw data, building test frameworks, and validating data products using Python. Collaborating closely with data analytics experts and stakeholders, you will contribute to the stability and functionality of our data pipelines. This role offers an exciting opportunity to work with cutting-edge technologies and make a significant impact on our data engineering processes.

Responsibilities:
- Analyse and organise raw data to meet business needs and objectives
- Develop, update, and maintain SQL scripts and test cases as applications and business rules evolve, identifying areas for improvement
- Delegate tasks effectively, ensuring timely and accurate completion of deliverables
- Partner with stakeholders, including Product, Data, and Design teams, to address technical issues and support data engineering needs
- Perform root cause analysis of existing models and propose effective solutions for improvement
- Serve as a point of contact for cross-functional teams, ensuring the smooth integration of quality assurance practices into workflows
- Demonstrate strong time management skills
- Lead and mentor a team of SQL testers and data professionals, fostering a collaborative and high-performing environment

What we're looking for in our applicants:
- 7+ years of relevant experience in data engineering and testing roles, including team management
- Proven experience leading and mentoring teams, with strong organizational and interpersonal skills
- Proficiency in SQL testing, with a focus on Snowflake, and experience with Microsoft SQL Server
- Advanced skills in writing complex SQL queries
- At least intermediate proficiency in Python, with experience using Python libraries for testing and ensuring data quality
- Hands-on experience with the Git version control system
- Working knowledge of cloud computing architecture (Azure DevOps)
- Experience with data pipeline and workflow management tools like Airflow
- Ability to perform root cause analysis of existing models and propose effective solutions
- Strong interpersonal skills and a collaborative team player

Nice to have:
- Broader ETL testing knowledge and experience
- Confluence
- Strong SQL query skills
- Data warehousing
- Snowflake
- Cloud platforms (Azure DevOps)

Joining: Immediate
Work location: Pune (hybrid)
Open positions: 1 (Senior SQL Quality Assurance Tester)

If interested, please share your updated resume with careers@dataceria.com.

Dataceria Software Solutions Pvt Ltd
Follow our LinkedIn for more job openings: https://www.linkedin.com/company/dataceria/
Email: careers@dataceria.com
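A minimal sketch of the Python-based SQL data-quality testing this role describes, written as pytest tests against Snowflake; the connection parameters, table names, and checks are assumptions for illustration.

```python
# Illustrative pytest data-quality checks (all names and the DSN are made up).
import pytest
import snowflake.connector


@pytest.fixture(scope="module")
def conn():
    c = snowflake.connector.connect(
        account="example_account", user="example_user", password="...",
        warehouse="TEST_WH", database="ANALYTICS", schema="PUBLIC",
    )
    yield c
    c.close()


def scalar(conn, sql):
    cur = conn.cursor()
    try:
        return cur.execute(sql).fetchone()[0]
    finally:
        cur.close()


def test_no_orphan_orders(conn):
    # Referential integrity: every order must reference an existing customer.
    orphans = scalar(conn, """
        SELECT COUNT(*) FROM orders o
        LEFT JOIN customers c ON o.customer_id = c.id
        WHERE c.id IS NULL
    """)
    assert orphans == 0


def test_row_counts_match_source(conn):
    # Reconciliation: target row count must match the staging source.
    src = scalar(conn, "SELECT COUNT(*) FROM staging.orders_raw")
    tgt = scalar(conn, "SELECT COUNT(*) FROM orders")
    assert src == tgt
```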

Posted 2 months ago

Apply

7.0 - 12.0 years

20 - 22 Lacs

Bengaluru

Remote

Collaborate with senior stakeholders to gather requirements, address constraints, and craft adaptable data architectures. Convert business needs into blueprints, guide agile teams, maintain quality data pipelines, and drive continuous improvements.

Required candidate profile: 7+ years in data roles (Data Architect/Engineer). Skilled in modelling (incl. Data Vault 2.0), Snowflake, SQL/Python, ETL/ELT, CI/CD, data mesh, governance, and APIs. Agile, with strong stakeholder and communication skills.

Perks and benefits: as per industry standards.

Posted 2 months ago

Apply