8.0 - 13.0 years
16 - 22 Lacs
Noida, Chennai, Bengaluru
Work from Office
Location: Bangalore, Chennai, Delhi, Pune.

Primary Roles and Responsibilities:
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfil them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate data pipelines via Airflow as the scheduler.

Skills and Qualifications:
- Bachelor's and/or master's degree in computer science, or equivalent experience.
- 6+ years of total IT experience, with 3+ years in data warehouse/ETL projects.
- Deep understanding of Star and Snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Must have experience with the AWS/Azure stack.
- Desirable: ETL with batch and streaming (Kinesis).
- Experience building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming / event-based data.
- Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.
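For context on the Airflow orchestration this posting asks for, here is a minimal illustrative DAG sketch. The DAG id, task names, and placeholder callables are hypothetical and not taken from the posting; a real pipeline would trigger Databricks or Spark jobs rather than print statements.

```python
# Minimal Airflow DAG sketch for orchestrating a daily ETL run.
# DAG id, task ids, and the placeholder steps are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load(**context):
    # Placeholder: pull data from a source system and land it in the lake.
    print(f"Extracting data for {context['ds']}")


def transform(**context):
    # Placeholder: trigger the Spark/Databricks transformation step.
    print(f"Transforming data for {context['ds']}")


with DAG(
    dag_id="dwh_daily_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    extract_task = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # simple linear dependency
```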
Posted 2 months ago
8.0 - 10.0 years
11 - 18 Lacs
Surat
Work from Office
Role Responsibilities:
- Design and implement data pipelines using MS Fabric.
- Develop data models to support business intelligence and analytics.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Collaborate with cross-functional teams to gather and define data requirements.
- Ensure data quality and integrity in all data processes.
- Implement best practices for data management, storage, and processing.
- Conduct performance tuning for data storage and retrieval for enhanced efficiency.
- Generate and maintain documentation for data architecture and data flow.
- Participate in troubleshooting data-related issues and implement solutions.
- Monitor and optimize cloud-based solutions for scalability and resource efficiency.
- Evaluate emerging technologies and tools for potential incorporation in projects.
- Assist in designing data governance frameworks and policies.
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Stay updated with industry trends and best practices in data engineering.

Qualifications:
- 8+ years of experience in data engineering roles.
- Strong expertise in MS Fabric and related technologies.
- Proficiency in SQL and relational database management systems.
- Experience with data warehousing solutions and data modeling.
- Hands-on experience with ETL tools and processes.
- Knowledge of cloud computing platforms (Azure, AWS, GCP).
- Familiarity with Python or similar programming languages.
- Ability to communicate complex concepts clearly to non-technical stakeholders.
- Experience implementing data quality measures and data governance.
- Strong problem-solving skills and attention to detail.
- Ability to work independently in a remote environment.
- Experience with data visualization tools is a plus.
- Excellent analytical and organizational skills.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience in Agile methodologies and project management.
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Cloud Architect to design and oversee scalable, secure, and cost-efficient cloud solutions. Great for architects who bridge technical vision with business needs.

Key Responsibilities:
- Design cloud-native solutions using AWS, Azure, or GCP
- Lead cloud migration and transformation projects
- Define cloud governance, cost control, and security strategies
- Collaborate with DevOps and engineering teams for implementation

Required Skills & Qualifications:
- Deep expertise in cloud architecture and multi-cloud environments
- Experience with containers, serverless, and microservices
- Proficiency in Terraform, CloudFormation, or equivalent
- Bonus: Cloud certification (AWS/Azure/GCP Architect)

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
Posted 2 months ago
6.0 - 11.0 years
20 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
- Experience in end-to-end data pipeline development and troubleshooting using Snowflake and Matillion Cloud.
- 5+ years of experience in DWH, with 2-4 years implementing DWH on Snowflake using Matillion.
- Design, develop, and maintain ETL processes using Matillion to extract, transform, and load data into Snowflake; develop and debug ETL programs primarily using Matillion Cloud.
- Collaborate with data architects and business stakeholders to understand data requirements and translate them into technical solutions.
- We seek a skilled technical professional to lead the end-to-end system and architecture design for our application and infrastructure.
- Data validation and end-to-end testing of ETL objects, source data analysis, and data profiling.
- Troubleshoot and resolve issues related to Matillion development and data integration.
- Collaborate with business users to create architecture in alignment with business needs.
- Collaborate in developing project requirements for the end-to-end data integration process using ETL for structured, semi-structured, and unstructured data.
- Strong understanding of ELT/ETL and integration concepts and design best practices.
- Experience in performance tuning of Matillion Cloud data pipelines, with the ability to troubleshoot issues quickly.
- Experience with SnowSQL and Snowpipe is an added advantage.
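As a point of reference for the Snowflake side of this role, here is a minimal sketch of a post-load validation query run from Python with the snowflake-connector-python package. The connection parameters and table names are hypothetical; the actual loads in this role would be built in Matillion.

```python
# Minimal sketch: post-load row-count validation against Snowflake.
# Credentials and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="etl_user",        # hypothetical service user
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Compare row counts between the staged load and the target table.
    cur.execute("SELECT COUNT(*) FROM STG_ORDERS")
    staged_rows = cur.fetchone()[0]
    cur.execute("SELECT COUNT(*) FROM ORDERS")
    target_rows = cur.fetchone()[0]
    print(f"staged={staged_rows}, target={target_rows}")
finally:
    conn.close()
```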
Posted 2 months ago
5.0 - 8.0 years
14 - 15 Lacs
Gurugram
Work from Office
Immediate openings for the position of Senior Data Engineer / Technical Lead at Mynd Integrated Solutions, a reputed company located in Gurgaon Sector 68.

Key Skills:
- 6-8 years of experience in data engineering, analytics, or related fields.
- Hands-on expertise in SQL, Python, and modern ETL frameworks (Airflow, dbt, etc.).
- Proven experience with cloud data platforms like Snowflake, Redshift, or BigQuery.
- Strong understanding of data modeling, warehousing, and performance optimization.
- Familiarity with data governance and compliance frameworks (e.g., ISO 27701, GDPR).
- Experience in delivering dashboards via Power BI, Tableau, or Looker.
- Excellent communication and stakeholder management skills.

Notice Period: Immediate joiners are preferred
Experience: 6-8 Years
Qualification: Any Graduation
CTC: As per market standards. Work from office from Day 1 (5 days working).
Job Location: Gurgaon Sector 68

Interested and serious candidates can send an updated CV to vishnu.peramsetty@myndsol.com. Feel free to contact me for further clarifications: Vishnu Vardhan - 8332951064

Job Title: Senior Data Engineer / Technical Lead
Experience: 6-8 Years
Location: Gurgaon
Reporting To: Head of Data Business / Chief Data Officer (CDO)

Role Overview: We are looking for a dynamic and experienced Senior Data Engineer / Technical Lead to spearhead our foundational data initiatives. This role combines hands-on engineering, strategic thinking, and team leadership to establish scalable infrastructure, implement robust data governance, and deliver actionable analytics for both internal operations and SaaS customers.

Key Responsibilities:
1. Leadership & Strategy
- Define and drive the data engineering vision, architecture, and roadmap.
- Translate business needs into scalable and performant data infrastructure.
- Lead a small team (4-5 members) of junior data engineers/analysts.
2. Data Infrastructure & Integration
- Design, build, and maintain reliable data pipelines and ETL processes.
- Integrate multiple data sources into a unified data warehouse (e.g., Snowflake, Redshift, BigQuery).
- Ensure scalable and secure infrastructure for real-time and batch processing.
3. Analytics & Dashboard Delivery
- Collaborate with product and business stakeholders to identify key KPIs.
- Deliver initial and ongoing analytics dashboards for internal stakeholders and external SaaS clients.
- Support productization of analytics and insights in client-facing interfaces.
4. Data Governance & Compliance
- Implement and monitor data quality checks, access policies, and retention standards.
- Work closely with compliance teams to ensure alignment with ISMS/PIMS standards.
- Conduct periodic internal audits and support external compliance reviews.
5. Team Enablement & Mentorship
- Provide technical guidance, code reviews, and mentoring to junior team members.
- Foster a culture of continuous improvement, learning, and documentation.

Required Skills & Qualifications:
- 6-8 years of experience in data engineering, analytics, or related fields.
- Hands-on expertise in SQL, Python, and modern ETL frameworks (Airflow, dbt, etc.).
- Proven experience with cloud data platforms like Snowflake, Redshift, or BigQuery.
- Strong understanding of data modeling, warehousing, and performance optimization.
- Familiarity with data governance and compliance frameworks (e.g., ISO 27701, GDPR).
- Experience in delivering dashboards via Power BI, Tableau, or Looker.
- Excellent communication and stakeholder management skills.

Preferred:
- Experience in a SaaS or multi-tenant analytics environment.
- Exposure to DevOps for Data, CI/CD, and Infrastructure-as-Code tools.
- Certification in cloud platforms (AWS, GCP, or Azure) or data privacy standards.
Posted 2 months ago
5.0 - 10.0 years
8 - 18 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role: Celonis Data Engineer
Skills: Celonis, Celonis EMS, Data Engineer, SQL, PQL, ETL, OCPM
Notice Period: 30-45 Days

Role & Responsibilities:
- Hands-on experience with Celonis EMS (Execution Management System).
- Strong SQL skills for data extraction, transformation, and modeling.
- Proficiency in PQL (Process Query Language) for custom process analytics.
- Experience integrating Celonis with SAP, Oracle, Salesforce, or other ERP/CRM systems.
- Knowledge of ETL, data pipelines, and APIs (REST/SOAP).

Process Mining & Analytical Skills:
- Understanding of business process modeling and process optimization techniques.
- At least one OCPM project experience.
- Ability to analyze event logs and identify bottlenecks, inefficiencies, and automation opportunities.
- 6-10 years of experience in the IT industry with Data Architecture / Business Process, of which 3-4 years in process mining, data analytics, or business intelligence.
- Celonis certification (e.g., Celonis Data Engineer, Business Analyst, or Solution Consultant) is a plus.
- OCPM experience is a plus.
Posted 2 months ago
2.0 - 5.0 years
18 - 30 Lacs
Bengaluru
Remote
We're seeking a Data Engineer to help build and maintain scalable data pipelines for Intelligine, our AI content platform. Requires 1–2 years' experience, strong Python/SQL skills, and a passion for clean, efficient code and problem-solving.
Posted 2 months ago
2.0 - 5.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Job Title: ML Engineer
Location: Bengaluru, KA (Hybrid)
Duration: 12 Months (possible extension)
Job Type: Contract Role

Job Description:

Role Overview: The client is seeking a Machine Learning Engineer to develop, deploy, and optimize ML models for industrial and business applications. The ideal candidate will have expertise in machine learning, deep learning, MLOps, and cloud-based AI solutions, working closely with data scientists, software engineers, and domain experts to enhance decision-making and automation within the organization.

Required Skills & Experience:

Technical Expertise:
- 3-7 years of experience in machine learning, deep learning, and AI model development.
- Proficiency in Python, TensorFlow, PyTorch, Scikit-Learn, and MLflow.
- Strong background in statistics, probability, and optimization techniques.
- Experience with cloud ML tools (Azure ML, AWS SageMaker, GCP Vertex AI).
- Knowledge of big data technologies (Spark, Hadoop, Databricks) and SQL/NoSQL databases.
- Experience in computer vision (OpenCV, YOLO, Detectron) or NLP (Transformers, BERT, GPT models) is a plus.

Data Engineering & MLOps:
- Strong experience in building data pipelines, feature stores, and CI/CD for ML models.
- Proficiency in Docker, Kubernetes, Airflow, and model versioning techniques.
- Hands-on experience with vector databases and embedding techniques.

Problem-Solving & Business Impact:
- Ability to translate business problems into ML solutions.
- Strong analytical mindset and problem-solving skills.
- Knowledge of AI ethics, bias mitigation, and explainability in ML models.

Preferred Qualifications:
- Bachelor's/Master's/PhD in Computer Science, Data Science, AI, or related fields.
- Experience in industrial AI applications, IoT-based analytics, or digital transformation projects.
- Exposure to edge AI, federated learning, and real-time ML applications.
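Since the posting pairs scikit-learn with MLflow, here is a minimal experiment-tracking sketch of the kind of workflow implied. The experiment name, model choice, and toy dataset are illustrative only, not part of the role description.

```python
# Minimal MLflow tracking sketch: log parameters, a metric, and a model.
# Experiment name and model are illustrative; a local tracking store is assumed.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # store the fitted model as an artifact
```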
Posted 2 months ago
1.0 - 4.0 years
3 - 6 Lacs
Coimbatore
Work from Office
About Responsive

Responsive, formerly RFPIO, is the market leader in an emerging new category of SaaS solutions called Strategic Response Management. Responsive customers, including Google, Microsoft, BlackRock, T. Rowe Price, Adobe, Amazon, Visa, and Zoom, are using Responsive to manage business-critical responses to RFPs, RFIs, RFQs, security questionnaires, due diligence questionnaires, and other requests for information. Responsive has nearly 2,000 customers of all sizes and has been voted "best in class" by G2 for 13 straight quarters. It also has more than 35% of the cloud SaaS leaders as customers, as well as more than 15 of the Fortune 100. Customers have used Responsive to close more than $300B in transactions to date.

About The Role

We are seeking a highly skilled Product Data Engineer with expertise in building, maintaining, and optimizing data pipelines using Python scripting. The ideal candidate will have experience working in a Linux environment, managing large-scale data ingestion, processing files in S3, and balancing disk space and warehouse storage efficiently. This role will be responsible for ensuring seamless data movement across systems while maintaining performance, scalability, and reliability.

Essential Functions
- ETL Pipeline Development: Design, develop, and maintain efficient ETL workflows using Python to extract, transform, and load data into structured data warehouses.
- Data Pipeline Optimization: Monitor and optimize data pipeline performance, ensuring scalability and reliability in handling large data volumes.
- Linux Server Management: Work in a Linux-based environment, executing command-line operations, managing processes, and troubleshooting system performance issues.
- File Handling & Storage Management: Efficiently manage data files in Amazon S3, ensuring proper storage organization, retrieval, and archiving of data.
- Disk Space & Warehouse Balancing: Proactively monitor and manage disk space usage, preventing storage bottlenecks and ensuring warehouse efficiency.
- Error Handling & Logging: Implement robust error-handling mechanisms and logging systems to monitor data pipeline health.
- Automation & Scheduling: Automate ETL processes using cron jobs, Airflow, or other workflow orchestration tools.
- Data Quality & Validation: Ensure data integrity and consistency by implementing validation checks and reconciliation processes.
- Security & Compliance: Follow best practices in data security, access control, and compliance while handling sensitive data.
- Collaboration with Teams: Work closely with data engineers, analysts, and product teams to align data processing with business needs.

Education
- Bachelor's degree in Computer Science, Data Engineering, or a related field.

Requirements
- 2+ years of experience in ETL development, data pipeline management, or backend data engineering.
- Proficiency in Python: strong hands-on experience writing Python scripts for ETL processes.
- Linux expertise: experience working with Linux servers, command-line operations, and system performance tuning.
- Cloud storage management: hands-on experience with Amazon S3, including file storage, retrieval, and lifecycle policies.
- Data pipeline management: experience with ETL frameworks, data pipeline automation, and workflow scheduling (e.g., Apache Airflow, Luigi, or Prefect).
- SQL & database handling: strong SQL skills for data extraction, transformation, and loading into relational databases and data warehouses.
- Disk space & storage optimization: ability to manage disk space efficiently, balancing usage across different systems.
- Error handling & debugging: strong problem-solving skills to troubleshoot ETL failures, debug logs, and resolve data inconsistencies.
- Experience with cloud data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Knowledge of message queues (Kafka, RabbitMQ) for data streaming.
- Familiarity with containerization tools (Docker, Kubernetes) for deployment.
- Exposure to infrastructure automation tools (Terraform, Ansible).

Knowledge, Ability & Skills
- Strong analytical mindset and ability to handle large-scale data processing efficiently.
- Ability to work independently in a fast-paced, product-driven environment.
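To illustrate the Python/S3 ETL pattern this role centres on, here is a minimal sketch that pulls a raw CSV from S3, transforms it with pandas, and writes a curated Parquet file back. Bucket names, keys, and the transformation logic are hypothetical.

```python
# Minimal S3-to-S3 ETL sketch with boto3 and pandas.
# Bucket names, keys, and the derived column are hypothetical.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Extract: read a raw CSV landed by an upstream process.
raw_obj = s3.get_object(Bucket="raw-landing-bucket", Key="orders/2024-01-01.csv")
df = pd.read_csv(io.BytesIO(raw_obj["Body"].read()))

# Transform: basic cleansing plus a derived column.
df = df.dropna(subset=["order_id"])
df["order_total"] = df["quantity"] * df["unit_price"]

# Load: write the curated file back to a processed prefix as Parquet.
buffer = io.BytesIO()
df.to_parquet(buffer, index=False)
s3.put_object(Bucket="curated-bucket", Key="orders/2024-01-01.parquet", Body=buffer.getvalue())
```

In practice a script like this would be wrapped in error handling and logging and scheduled via cron or Airflow, as the posting describes.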
Posted 2 months ago
4.0 - 8.0 years
10 - 20 Lacs
Gurugram
Remote
US Shift - 5 working days. Remote work. (US Airline Group)
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Strong focus on AWS and PySpark.
- Knowledge of AWS services, including but not limited to S3, Redshift, Athena, EMR, and Glue.
- Proficiency in PySpark and related Big Data technologies for ETL processing.
- Strong SQL skills for data manipulation and querying.
- Familiarity with data warehousing concepts and dimensional modeling.
- Experience with data governance, data quality, and data security practices.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills to work effectively with cross-functional teams.
Posted 2 months ago
3.0 - 8.0 years
9 - 18 Lacs
Hyderabad
Hybrid
Data Engineer with Python development experience
Experience: 3+ Years
Mode: Hybrid (2-3 days/week)
Location: Hyderabad

Key Responsibilities
- Develop, test, and deploy data processing pipelines using AWS serverless technologies such as AWS Lambda, Step Functions, DynamoDB, and S3.
- Implement ETL processes to transform and process structured and unstructured data efficiently.
- Collaborate with business analysts and other developers to understand requirements and deliver solutions that meet business needs.
- Write clean, maintainable, and well-documented code following best practices.
- Monitor and optimize the performance and cost of serverless applications.
- Ensure high availability and reliability of the pipeline through proper design and error-handling mechanisms.
- Troubleshoot and debug issues in serverless applications and data workflows.
- Stay up to date with emerging technologies in the AWS and serverless ecosystem to recommend improvements.

Required Skills and Experience
- 3-5 years of hands-on Python development experience, including experience with libraries like boto3, Pandas, or similar tools for data processing.
- Strong knowledge of AWS services, especially Lambda, S3, DynamoDB, Step Functions, SNS, SQS, and API Gateway.
- Experience building data pipelines or workflows to process and transform large datasets.
- Familiarity with serverless architecture and event-driven programming.
- Knowledge of best practices for designing secure and scalable serverless applications.
- Proficiency in version control systems (e.g., Git) and collaboration tools.
- Understanding of CI/CD pipelines and DevOps practices.
- Strong debugging and problem-solving skills.
- Familiarity with database systems, both SQL (e.g., RDS) and NoSQL (e.g., DynamoDB).

Preferred Qualifications
- AWS certifications (e.g., AWS Certified Developer Associate or AWS Certified Solutions Architect Associate).
- Familiarity with testing frameworks (e.g., pytest) and ensuring test coverage for Python applications.
- Experience with Infrastructure as Code (IaC) tools such as AWS CDK or CloudFormation.
- Knowledge of monitoring and logging tools.
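To illustrate the event-driven serverless pattern the posting describes, here is a minimal Lambda handler sketch: it reads the S3 object referenced in the triggering event and writes a small audit record to DynamoDB. The table name, record shape, and JSON-array assumption are hypothetical.

```python
# Minimal AWS Lambda sketch for an S3-triggered step in a serverless pipeline.
# Table name, record shape, and the transformation are hypothetical.
import json

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("pipeline-run-log")  # hypothetical DynamoDB table


def lambda_handler(event, context):
    # S3 put notifications carry the bucket and key of the new object.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    rows = json.loads(body)  # assumes the object is a JSON array, for simplicity

    # Write a small audit entry; the real transformation step would go here.
    table.put_item(Item={"object_key": key, "row_count": len(rows)})

    return {"statusCode": 200, "processed": len(rows)}
```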
Posted 2 months ago
10.0 - 15.0 years
15 - 19 Lacs
Noida
Work from Office
DE Architect

Objective 1: Develop and Implement a Metadata-Driven Framework for Medallion Architecture
- Strong in data modeling and pipeline design
- Experience with metadata-driven frameworks and governance practices
- Strong analytical skills to identify and reduce redundancies
- Knowledge of Snowflake and Medallion Architecture

Objective 2: Optimize Data Pipeline Performance and Reliability
- Expertise in data pipeline optimization and performance tuning
- Experience with indexing and efficient orchestration techniques
- Ability to identify and implement cost-saving measures
- Knowledge of monitoring tools and processes

Objective 3: Enhance Data Modelling and Reusability
- Strong communication and training skills
- Experience in data modeling and reusable asset creation
- Able to identify and train subject matter experts
- Proficiency in gathering and analyzing stakeholder feedback

Objective 4: Strengthen DevOps Practices and Documentation
- Knowledge of version control and release processes
- Experience with DevOps processes and CI/CD pipelines
- Ability to establish and maintain data asset frameworks
- Strong documentation skills

Objective 5: Lead and Develop the Data Engineering Team
- Leadership and team management skills
- Experience in conducting performance reviews and skill development plans
- Ability to establish and lead a Center of Excellence (CoE)
- Proficiency in using tasking and estimation tools like Jira and DevOps
Posted 2 months ago
3.0 - 5.0 years
13 - 23 Lacs
Bengaluru
Hybrid
Role Overview: We are looking for a blend of BI and data-engineering skills. The main work is to design and build complex Tableau dashboards, set up robust automated data pipelines and backend tables, and do this in an optimized manner. There is also LLM integration into the dashboard, which will require some initiative and exploration. The automation and LLM integration will require some Python experience.

Job Description:

Requirements
- 3-5 years of experience in a product analytics, data analyst, or analytics product role
- Hands-on experience in advanced SQL and Python
- Advanced Tableau experience (minimum 2 years of hands-on Tableau experience is mandatory)
- Experience with LLM integration into dashboards
- Proven ability to support and improve analytical products that drive business decisions

Roles and Responsibilities:

Product Thinking & Ownership
- Product development exposure: experience working on analytics or data products throughout the product lifecycle, from requirements to delivery.
- Prioritization skills: can independently manage a backlog, help triage requests, and contribute to roadmap planning.

Analytical & Technical Proficiency
- Strong SQL skills: can write and optimize queries to explore data and validate product behavior or outcomes.
- Data visualization: comfortable building dashboards or reports in tools like Tableau.
- Cross-functional skills: experience working with product managers, analysts, and engineers to drive product outcomes.
- Comfort with data pipelines: understands how data flows through systems; collaborates effectively with data engineers and analytics teams.
- Model consumption: understands the basics of AI/ML model outputs and how they feed into products.
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are looking for a Cloud Data Engineer to build cloud-based data pipelines and analytics platforms.

Key Responsibilities:
- Develop ETL workflows using cloud data services.
- Manage data storage, lakes, and warehouses.
- Ensure data quality and pipeline reliability.

Required Skills & Qualifications:
- Experience with BigQuery, Redshift, or Azure Synapse.
- Proficiency in SQL, Python, or Spark.
- Familiarity with data lake architecture and batch/streaming.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
Posted 2 months ago
6.0 - 8.0 years
0 - 0 Lacs
Bengaluru
Remote
Azure DE

Primary Responsibilities:
- Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
- Create data models for analytics purposes.
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies.
- Use Azure Data Factory and Databricks to assemble large, complex data sets.
- Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
- Ensure data security and compliance.
- Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures.

Required skills:
- A blend of technical expertise, analytical problem-solving, and collaboration with cross-functional teams
- Azure DevOps
- Apache Spark, Python
- SQL proficiency
- Azure Databricks knowledge
- Big data technologies

The DEs should be well versed in coding, Spark core, and data ingestion using Azure, and should have decent communication skills. They should also have core Azure DE skills and coding skills (PySpark, Python, and SQL).
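As a small illustration of the ingestion-and-transformation work described above, here is a minimal PySpark sketch of a cleansing step on Databricks, reading raw JSON from a data lake path and writing a partitioned Delta table. The storage paths and column logic are hypothetical, and `spark` is assumed to be the session provided by the Databricks runtime.

```python
# Minimal PySpark sketch for a raw-to-cleansed step on Azure Databricks.
# Paths and column names are hypothetical; `spark` is the runtime-provided session.
from pyspark.sql import functions as F

raw_path = "abfss://raw@mydatalake.dfs.core.windows.net/sales/"        # hypothetical
silver_path = "abfss://silver@mydatalake.dfs.core.windows.net/sales/"  # hypothetical

df = (
    spark.read.format("json")
    .option("multiLine", "true")
    .load(raw_path)
)

cleansed = (
    df.dropDuplicates(["sale_id"])          # basic de-duplication
      .filter(F.col("amount") > 0)          # simple validation rule
      .withColumn("ingest_date", F.current_date())
)

(
    cleansed.write.format("delta")
    .mode("overwrite")
    .partitionBy("ingest_date")
    .save(silver_path)
)
```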
Posted 2 months ago
3 - 6 years
20 - 30 Lacs
Chennai
Hybrid
The role will require a unique blend of strong DataOps technical and design skills to translate business decisions into data requirements. This individual will build a deep understanding of the infrastructure data we use in order to work across the ID&A team and key stakeholders to identify the appropriate data to tell a data story. This includes implementing and maintaining a data architecture that follows data management best practices, ensuring data ingestion, transformation, storage, and analytics are handled according to their specific purpose using the appropriate tools: ingestion captures raw data without applying business logic, transformation processes data discretely for auditability, and analytical queries retrieve structured outputs without relying on upstream processes. They will be responsible for building and automating data pipelines to maximize data availability and efficiency, as well as migrating the data model and transformations to our target architecture. This individual will bring passion for data-driven decisions, enterprise solutions, and collaboration to the role, transforming platform data into actionable insights by utilizing data engineering and data visualization best practices.

Key responsibilities include:
- Data Architecture: Perform all technical aspects of data architecture and database management for ID&A, including developing data pipelines, new database structures, and APIs as applicable
- Data Design: Translate logical data architectures into physical data designs, ensuring alignment with data modeling best practices and standards
- Data Process and Monitoring: Ensure proper data ingestion, validation, testing, and monitoring for the ID&A data lake
- Data Quality Testing: Develop and provide subject matter expertise on data analysis, testing, and Quality Assurance (QA) methodologies and processes
- Administration: Support database and data platform administration for initiatives building, enhancing, or maintaining databases, data warehouses, and data pipelines
- Data Migration: Design and support migration to a technology-agnostic data model that decouples data architecture from the specific tools or platforms used for storage, processing, or access
- Data Integrity: Ensure accuracy, completeness, and data quality, independent of upstream or downstream systems; collaborate with data owners to validate and refine data sources where applicable
- Agile Methodologies: Function as a senior member of an agile feature team and manage data assets per enterprise standards, guidelines, and policies
- Collaboration: Partner closely with the business intelligence team to capture and define data requirements for new and enhanced data visualizations
- Prioritization: Work with product teams to prioritize new features for ongoing sprints and manage the backlog
- Continuous Improvement: Monitor performance and make recommendations for areas of opportunity/improvement for automation and tool usage
- Compliance: Understand and abide by SDLC standards and policies
- Liaison: Act as point of contact for data-related inquiries and data access requests
- Innovation: Leverage the evolving technical landscape as needed, including AI, Big Data, Machine Learning, and other technologies, to deliver meaningful business insights

Minimum Requirements:
- 4+ years of DataOps engineering experience in implementing pipeline orchestration, data quality monitoring, governance, security processes, and self-service data access
- Experience managing databases, ETL/ELT pipelines, data lake architectures, and real-time processing
- Proficiency in API development and stream processing frameworks
- Hands-on coding experience in Python
- Hands-on expertise with design and development across one or more database management systems (e.g., SQL Server, PostgreSQL, Oracle)
- Testing and troubleshooting: ability to test, troubleshoot, and debug data processes
- Strong analytical skills with a proven ability to understand and document business data requirements in complete, accurate, extensible, and flexible logical data models and data visualization tools (e.g., Apptio BI, Power BI)
- Ability to write efficient SQL queries to extract and manipulate data from relational databases, data warehouses, and batch processing systems
- Experience in data quality and QA testing methodologies
- Fluent in data risk, management, and compliance terminology and best practices
- Proven track record of managing large, complex ecosystems with multiple stakeholders
- Self-starter who is able to problem-solve effectively, organize and document processes, and prioritize features with limited guidance
- An enterprise mindset that connects the dots across various requirements and the broader operations/infrastructure data architecture landscape
- Excellent collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-team communication
- Understanding of complex software delivery including build, test, deployment, and operations; conversant in AI, Data Science, and Business Intelligence concepts and technology stack
- Exposure to distributed (multi-tiered) systems, algorithms, IT asset management, cloud services, and relational databases
- Foundational public cloud (AWS, Google, Microsoft) certification; advanced public cloud certifications a plus
- Experience working in technology business management, technology infrastructure, or data visualization teams a plus
- Experience with design and coding across multiple platforms and languages a plus
- Bachelor's degree in computer science, computer science engineering, data engineering, or a related field required; advanced degree preferred
Posted 2 months ago
8 - 10 years
10 - 15 Lacs
Chennai
Work from Office
The IT Architect will design and implement data pipelines, cloud infrastructure, and AI-driven solutions that support business intelligence and analytics. They will collaborate with Data Engineers, Data Scientists, and Cloud teams to ensure seamless integration of technology solutions.

Roles and Responsibilities:
- Data Architecture & Engineering: Design and optimize data pipelines, ETL processes, and data lakes for structured and unstructured data.
- Cloud Infrastructure: Architect cloud-based solutions using platforms like AWS, Azure, or Google Cloud for scalability and security.
- Machine Learning & AI Integration: Work with Data Scientists to deploy ML models and AI-driven analytics.
- Big Data Technologies: Implement Hadoop, Spark, Kafka, and other big data frameworks for high-performance data processing.
- Security & Compliance: Ensure data governance, encryption, and compliance with industry standards.
- Performance Optimization: Monitor and enhance data storage, retrieval, and processing efficiency.
- Stakeholder Collaboration: Work with business leaders, analysts, and IT teams to align technology with business goals.
- Disaster Recovery & Backup: Develop data recovery strategies for business continuity.
- Documentation & Best Practices: Maintain technical documentation and enforce best practices in data architecture.
Posted 2 months ago
8 - 12 years
13 - 20 Lacs
Hyderabad
Remote
About The Role:

We are seeking a highly skilled and experienced Senior PySpark/Python Developer to play a critical role in building a robust and reliable system for managing and disseminating customer notifications regarding PG&E's Planned Power Outages (PPOs). This is an exciting opportunity to tackle complex data challenges within a dynamic environment and contribute directly to improving customer communication and experience.

As a key member of the team, you will be responsible for developing and implementing data processing pipelines that can ingest, transform, and synthesize information from various backend systems related to PPOs. Your primary goal will be to "eat ambiguity and excrete certainty" by taking complex, ever-changing data and producing clear, consistent, and timely notifications for PG&E customers. This role requires a strong individual contributor who can execute tasks within a defined scope while also demonstrating leadership qualities such as adaptability, continuous learning, and ownership. You should be comfortable navigating ambiguous situations, proactively identifying solutions, and providing direction even when clear, pre-defined solutions are not immediately apparent.

Responsibilities:
- Design, develop, and maintain scalable and efficient data processing pipelines using PySpark and Python.
- Leverage your expertise in data engineering principles to build robust and reliable solutions.
- Work with complex and dynamic datasets related to work requests and planned power outages.
- Apply your knowledge of Palantir Foundry to integrate with existing data infrastructure and potentially build new modules or workflows.
- Develop solutions to consolidate and finalize information about planned power outages from disparate systems.
- Implement logic to ensure accurate and timely notifications are generated for customers, minimizing confusion and inconsistencies.
- Identify and address edge cases and special circumstances within the PPO process.
- Collaborate effectively with cross-functional teams, including business analysts, product owners, and other engineers.
- Take ownership of technical challenges and drive them to resolution.
- Proactively learn and adapt to new technologies and evolving requirements.
- Contribute to the development of technical documentation and best practices.

Required Skills and Experience:
- Minimum of 8+ years of overall experience in software development with a strong focus on data engineering.
- Extensive and demonstrable experience with PySpark for large-scale data processing.
- Strong proficiency in Python and its relevant data engineering libraries.
- Hands-on experience with Palantir Foundry and its core functionalities (e.g., Ontology, Pipelines, Actions, Contour).
- Solid understanding of data modeling, data warehousing concepts, and ETL/ELT processes.
- Experience working with complex and high-volume datasets.
- Ability to write clean, efficient, and well-documented code.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Ability to work independently and manage priorities effectively in a remote setting.
- Demonstrated ability to take ownership and drive tasks to completion.
- Comfort navigating ambiguous situations and proposing solutions.
- A proactive and continuous learning mindset.

Nice To Have:
- Experience with cloud platforms (e.g., AWS, Azure, GCP).
- Familiarity with data visualization tools.
- Understanding of notification systems and best practices.
- Prior experience in the utilities or energy sector.
Posted 2 months ago
8 - 12 years
35 - 60 Lacs
Bengaluru
Work from Office
Job Summary

NetApp is a cloud-led, data-centric software company that helps organizations put data to work in applications that elevate their business. We help organizations unlock the best of cloud technology. As a member of Solutions Integration Engineering, you work cross-functionally to define and create engineered solutions/products which accelerate field adoption. We work closely with ISVs and with the startup ecosystem in the Virtualization, Cloud, and AI/ML domains to build solutions that matter for customers. You will work closely with the product owner and product lead on the company's current and future strategies related to these domains.

Job Requirements
- Lead the delivery of features, including participating in the full software development lifecycle.
- Deliver reliable, innovative solutions and products.
- Participate in product design, development, verification, troubleshooting, and delivery of a system or major subsystems, including authoring project specifications.
- Work closely with cross-functional teams, including business stakeholders, to innovate and unlock new use cases for our customers.
- Write unit and automated integration tests and project documentation.
- Mentor the juniors in the team.

Technical Skills
- Understanding of the software development lifecycle.
- Proficiency in full-stack development: Python, the container ecosystem, cloud, and modern ML frameworks.
- Knowledge of data storage and artificial intelligence concepts, including server/storage architecture, batch/stream processing, data warehousing, data lakes, distributed filesystems, OLTP/OLAP databases, and data pipelining tools, as well as model inferencing and RAG workflows.
- Exposure to data pipelines, integrations, and Unix-based operating system kernels and development environments, e.g. Linux or FreeBSD.
- A strong understanding of basic to complex concepts related to computer architecture, data structures, and new programming paradigms.
- Demonstrated creative and systematic approach to problem solving.
- Excellent written and verbal communication skills.

Education
- Minimum 8 years of experience; must be hands-on with coding.
- B.E/B.Tech or M.S in Computer Science or a related technical field.
Posted 2 months ago
5 - 10 years
20 - 35 Lacs
Bengaluru
Work from Office
Position: PySpark Developer
Experience: 5+ years
Location: Bangalore
Notice Period: Immediate to 30 days

Roles & Responsibilities:
- 5+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
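As a small illustration of the PySpark transformation and tuning work described above, here is a minimal sketch that joins a large Hive fact table to a small dimension with a broadcast hint and writes a partitioned result table. Database and table names are hypothetical.

```python
# Minimal PySpark sketch: broadcast join plus partitioned write to a Hive table.
# Database/table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_etl")
    .enableHiveSupport()      # use the Hive metastore on the cluster
    .getOrCreate()
)

orders = spark.table("raw_db.orders")        # large fact table
customers = spark.table("raw_db.customers")  # small dimension table

enriched = (
    orders.join(F.broadcast(customers), on="customer_id", how="left")  # avoid a shuffle on the small side
          .withColumn("load_date", F.current_date())
)

(
    enriched.write.mode("overwrite")
    .partitionBy("load_date")
    .saveAsTable("curated_db.orders_enriched")
)
```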
Posted 2 months ago
11 - 20 years
20 - 30 Lacs
Mumbai
Work from Office
Role: Principal ML Ops Architect

Responsibilities:
1. Strategic Leadership:
a. Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities.
b. Oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes.
c. Foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture:
a. Design and implement scalable, reliable, and efficient ML Ops architectures.
b. Select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle.
c. Ensure compliance with industry best practices and standards for ML Ops.
3. Team Management:
a. Lead and mentor a team of ML Ops engineers and architects.
b. Foster collaboration and knowledge sharing among team members.
c. Provide technical guidance and support to data scientists and engineers.
4. Innovation and Research:
a. Stay up-to-date with emerging ML Ops trends and technologies.
b. Research and evaluate new tools and techniques to enhance ML Ops capabilities.
c. Contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred.
- Proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Experience with distributed computing frameworks (Spark, Hadoop).
- Knowledge of graph databases and auto ML libraries.
- Bachelor's / Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Solid understanding and knowledge of containerization technologies (Docker, Kubernetes).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in cloud platforms, containerization, and ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Extensive experience with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes).
- Strong problem-solving and analytical skills.
- Ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, MLOps Architect, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker
Posted 2 months ago
11 - 20 years
20 - 30 Lacs
Surat
Work from Office
Role: Principal ML Ops Architect

Responsibilities:
1. Strategic Leadership:
a. Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities.
b. Oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes.
c. Foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture:
a. Design and implement scalable, reliable, and efficient ML Ops architectures.
b. Select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle.
c. Ensure compliance with industry best practices and standards for ML Ops.
3. Team Management:
a. Lead and mentor a team of ML Ops engineers and architects.
b. Foster collaboration and knowledge sharing among team members.
c. Provide technical guidance and support to data scientists and engineers.
4. Innovation and Research:
a. Stay up-to-date with emerging ML Ops trends and technologies.
b. Research and evaluate new tools and techniques to enhance ML Ops capabilities.
c. Contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred.
- Proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Experience with distributed computing frameworks (Spark, Hadoop).
- Knowledge of graph databases and auto ML libraries.
- Bachelor's / Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Solid understanding and knowledge of containerization technologies (Docker, Kubernetes).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in cloud platforms, containerization, and ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Extensive experience with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes).
- Strong problem-solving and analytical skills.
- Ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, MLOps Architect, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker
Posted 2 months ago
5 - 10 years
25 - 30 Lacs
Bengaluru
Hybrid
Job Title: Senior Data Engineer

Overview: We are seeking a highly skilled and experienced Senior Data Engineer to join our data team. This role is pivotal in designing, building, and maintaining robust data infrastructure to support analytics, reporting, and machine learning initiatives. The ideal candidate will have strong technical expertise in data engineering tools and best practices, and a passion for transforming raw data into actionable insights.

Responsibilities:
- Design, develop, and optimize scalable data pipelines to process large volumes of structured and unstructured data.
- Build and maintain efficient ETL (Extract, Transform, Load) workflows and automate data integration from various sources.
- Manage and optimize data warehousing solutions to ensure high availability and performance.
- Collaborate with machine learning engineers to deploy and maintain ML models in production environments.
- Ensure high standards of data quality, governance, and consistency across all data systems.
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements and enhance data models.
- Monitor, troubleshoot, and improve data infrastructure for reliability and scalability.

Qualifications:
- Proven experience in data engineering with a strong grasp of SQL, Python, and Apache Spark.
- Hands-on experience in designing and implementing ETL pipelines and data models.
- Proficient with cloud platforms such as AWS, Google Cloud Platform (GCP), or Microsoft Azure.
- Deep understanding of big data technologies, including Hadoop, Kafka, Hive, and related tools.
- Strong problem-solving skills and the ability to work collaboratively in a team-oriented environment.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.

Preferred Qualifications (Optional):
- Experience with orchestration tools such as Apache Airflow or Prefect.
- Familiarity with containerization (Docker, Kubernetes).
- Knowledge of data security and compliance standards (e.g., GDPR, HIPAA).
Posted 2 months ago
11 - 20 years
20 - 30 Lacs
Nagpur
Work from Office
Role: Principal ML Ops Architect

Responsibilities:
1. Strategic Leadership:
a. Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities.
b. Oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes.
c. Foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture:
a. Design and implement scalable, reliable, and efficient ML Ops architectures.
b. Select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle.
c. Ensure compliance with industry best practices and standards for ML Ops.
3. Team Management:
a. Lead and mentor a team of ML Ops engineers and architects.
b. Foster collaboration and knowledge sharing among team members.
c. Provide technical guidance and support to data scientists and engineers.
4. Innovation and Research:
a. Stay up-to-date with emerging ML Ops trends and technologies.
b. Research and evaluate new tools and techniques to enhance ML Ops capabilities.
c. Contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred.
- Proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Experience with distributed computing frameworks (Spark, Hadoop).
- Knowledge of graph databases and auto ML libraries.
- Bachelor's / Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Solid understanding and knowledge of containerization technologies (Docker, Kubernetes).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in cloud platforms, containerization, and ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Extensive experience with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes).
- Strong problem-solving and analytical skills.
- Ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, MLOps Architect, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker
Posted 2 months ago
11 - 20 years
20 - 30 Lacs
Chennai
Work from Office
Role: Principal ML Ops Architect

Responsibilities:
1. Strategic Leadership:
a. Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities.
b. Oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes.
c. Foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture:
a. Design and implement scalable, reliable, and efficient ML Ops architectures.
b. Select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle.
c. Ensure compliance with industry best practices and standards for ML Ops.
3. Team Management:
a. Lead and mentor a team of ML Ops engineers and architects.
b. Foster collaboration and knowledge sharing among team members.
c. Provide technical guidance and support to data scientists and engineers.
4. Innovation and Research:
a. Stay up-to-date with emerging ML Ops trends and technologies.
b. Research and evaluate new tools and techniques to enhance ML Ops capabilities.
c. Contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred.
- Proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Experience with distributed computing frameworks (Spark, Hadoop).
- Knowledge of graph databases and auto ML libraries.
- Bachelor's / Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Solid understanding and knowledge of containerization technologies (Docker, Kubernetes).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in cloud platforms, containerization, and ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Extensive experience with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes).
- Strong problem-solving and analytical skills.
- Ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, MLOps Architect, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker
Posted 2 months ago