2.0 - 5.0 years
0 - 0 Lacs
Kochi, Coimbatore
Work from Office
Role Summary: We are looking for a Data Engineer who will be responsible for designing and developing scalable data pipelines, managing data staging layers, and integrating multiple data sources through APIs and SQL-based systems. You'll work closely with analytics and development teams to ensure high data quality and availability.
Key Responsibilities:
- Design, build, and maintain robust data pipelines and staging tables.
- Develop and optimize SQL queries for ETL processes and reporting.
- Integrate data from diverse APIs and external sources.
- Ensure data integrity, validation, and version control across systems.
- Collaborate with data analysts and software engineers to support analytics use cases.
- Automate data workflows and improve processing efficiency.
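As an illustration of the kind of pipeline this role describes, here is a minimal, hedged sketch in Python that pulls records from a REST API and lands them in a SQL staging table. The endpoint URL, table name, and column set are hypothetical placeholders, and SQLite stands in for whatever SQL-based system the team actually uses.

```python
import sqlite3
import requests

API_URL = "https://example.com/api/orders"   # hypothetical endpoint
DB_PATH = "warehouse.db"                      # SQLite stands in for the real SQL system

def load_orders_to_staging():
    # Extract: pull raw records from the source API
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    records = response.json()

    conn = sqlite3.connect(DB_PATH)
    try:
        # Staging table mirrors the raw payload; downstream ETL reshapes it
        conn.execute(
            """CREATE TABLE IF NOT EXISTS stg_orders (
                   order_id TEXT PRIMARY KEY,
                   customer_id TEXT,
                   amount REAL,
                   loaded_at TEXT DEFAULT CURRENT_TIMESTAMP
               )"""
        )
        # Load: upsert so reruns stay idempotent
        conn.executemany(
            "INSERT OR REPLACE INTO stg_orders (order_id, customer_id, amount) VALUES (?, ?, ?)",
            [(r["order_id"], r["customer_id"], r["amount"]) for r in records],
        )
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    load_orders_to_staging()
```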
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an experienced professional with 4+ years in data engineering, you will be responsible for the following:
- Strong proficiency in writing complex SQL queries, stored procedures, and performance tuning to ensure efficient data retrieval and manipulation.
- Expertise in Azure Data Factory (ADF) for creating pipelines, data flows, and orchestrating data movement within the Azure environment.
- Proficiency in SQL Server Integration Services (SSIS) for ETL processes, package creation, and deployment to facilitate seamless data integration.
- Knowledge of Azure Synapse Analytics for data warehousing, distributed query execution, and integration with various Azure services.
- Familiarity with Jupyter Notebooks or Synapse Notebooks for data exploration and transformation.
- Understanding of Azure Blob Storage, Data Lake Storage, and their integration with data pipelines for efficient data storage and retrieval.
- Experience in Azure Analysis Services for building and managing semantic models to support business intelligence requirements.
- Knowledge of various data ingestion methods, including batch processing, real-time streaming, and incremental data loads, to ensure timely and accurate data processing.
Additional skills that would be advantageous for this role include:
- Experience in integrating Fabric with Power BI, Synapse, and other Azure services to enhance data visualization and analytics capabilities.
- Setting up CI/CD pipelines for ETL/ELT processes using tools like Azure DevOps or GitHub Actions to streamline data pipeline deployment.
- Familiarity with tools like Azure Event Hubs or Stream Analytics for large-scale data ingestion to support real-time data processing needs.
This position is based in Chennai, India, and there is currently 1 open position available.
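As a hedged illustration of the incremental-load pattern mentioned above, the sketch below pulls only rows changed since the last high-water mark from a SQL Server source using pyodbc. The connection string, table, and watermark storage are hypothetical; in an ADF or SSIS pipeline the same idea is usually expressed with a watermark lookup plus a parameterised copy activity.

```python
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-server;DATABASE=SalesDB;Trusted_Connection=yes;"
)  # placeholder connection string

def read_last_watermark() -> str:
    """Hypothetical: the watermark would normally live in a control table or pipeline variable."""
    return "2024-05-01T00:00:00"

def extract_incremental_rows():
    conn = pyodbc.connect(CONN_STR)
    try:
        cursor = conn.cursor()
        # Only fetch rows modified after the previous run's high-water mark
        cursor.execute(
            "SELECT order_id, customer_id, amount, modified_at "
            "FROM dbo.Orders WHERE modified_at > ?",
            read_last_watermark(),
        )
        rows = cursor.fetchall()
        # Downstream: land these rows in staging, then advance the stored watermark
        return rows
    finally:
        conn.close()

if __name__ == "__main__":
    print(f"Fetched {len(extract_incremental_rows())} changed rows")
```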
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
As the COE Solution Development Lead at Teradata, you will be a key thought leader responsible for overseeing the detailed design, development, and maintenance of complex data and analytic solutions. Your role will involve utilizing strong technical and project management skills, as well as team building and mentoring capabilities. You will need to have a deep understanding of Teradata's Solutions Strategy, Technology, Data Architecture, and the partner engagement model. Reporting directly to Teradata's Head of Solution COE, you will play a crucial role in leading a team that develops scalable, efficient, and innovative data and analytics solutions to address complex business problems.
Your key responsibilities will include leading the end-to-end process of solution development, designing comprehensive solution architectures, ensuring flexibility for the integration of various data sources and platforms, implementing best practices in data analytics solutions, collaborating with senior leadership, and mentoring a team of professionals to foster a culture of innovation and continuous learning. Additionally, you will work towards delivering solutions on time and within budget, facilitating knowledge sharing across teams, and ensuring that data solutions are scalable, secure, and aligned with the organization's overall technological roadmap. You will collaborate with the COE Solutions lead to transform conceptual solutions into detailed designs and lead a team of data scientists, solution engineers, data engineers, and software engineers. Furthermore, you will work closely with product development, legal, IT, and business teams to ensure seamless integration of data analytics solutions and the protection of related IP.
To qualify for this role, you should have a Bachelor's degree in Computer Science, Engineering, Data Science, or a related field, with a preference for an MS or MBA. You should also possess over 15 years of experience in IT, with at least 10 years in data and analytics solution development and 4+ years in a leadership or senior management position. Along with a proven track record in developing data-driven solutions, you should have experience working with cross-functional teams and a strong understanding of emerging trends in data analytics technologies.
We believe you will thrive at Teradata due to our people-first culture, flexible work model, focus on well-being, and commitment to Diversity, Equity, and Inclusion. If you are a collaborative, analytical, and innovative professional with excellent communication skills and a passion for data analytics, we invite you to join us in solving business challenges and driving enterprise analytics forward.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
Join our fast-growing data team at the forefront of cloud data architecture and innovation. We are focused on building scalable, secure, and modern data platforms using cutting-edge Snowflake and other modern data stack technologies. If you are passionate about creating high-performance data infrastructure and solving complex data challenges in a cloud-native environment, this opportunity is perfect for you.
As a Senior Data Engineer specializing in Snowflake and the modern data stack, your role will involve architecting and implementing enterprise-grade cloud-native data warehousing solutions. This hands-on engineering position offers significant architectural influence, where you will collaborate extensively with dbt, Fivetran, and other modern data tools to create efficient, maintainable, and scalable data pipelines using ELT-first approaches. Your responsibilities will include showcasing technical expertise across Snowflake, dbt, data ingestion, SQL and data modeling, cloud platforms, orchestration, programming, and DevOps. Additionally, you will be expected to contribute to data management by understanding data governance frameworks, data quality practices, and data visualization tools.
Preferred qualifications and certifications include a Bachelor's degree in Computer Science or a related field, substantial hands-on experience in data engineering with a focus on cloud data warehousing, and relevant certifications such as Snowflake SnowPro and dbt Analytics Engineering. Your work will revolve around designing and implementing robust data warehouse solutions, architecting ELT pipelines, building automated data ingestion processes, maintaining data transformation workflows, and developing data modeling best practices. You will optimize Snowflake warehouse performance, implement data quality tests and monitoring, build CI/CD pipelines, and collaborate with analytics teams to support self-service data access.
Valtech offers an international network of data professionals, continuous development opportunities, and a culture that values freedom and responsibility. We are committed to creating an equitable workplace that supports individuals from diverse backgrounds to thrive, grow, and achieve their goals. If you are ready to push the boundaries of innovation and creativity in a supportive environment, we encourage you to apply and join the Valtech team.
Posted 1 week ago
8.0 - 13.0 years
7 - 11 Lacs
Pune
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year at the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities worldwide, we support 100+ clients across the banking, financial services and energy sectors, and we are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry - projects that will transform the financial services industry.
MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage.
JOB SUMMARY:
Position: Senior Consultant - Data Engineer
Location: Capco locations (Bengaluru/Chennai/Hyderabad/Pune/Mumbai/Gurugram)
Band: M3/M4 (8 to 14 years)
Responsibilities
- Design, build and optimise data pipelines and ETL processes in Azure Databricks, ensuring high performance, reliability, and scalability.
- Implement best practices for data ingestion, transformation, and cleansing to ensure data quality and integrity.
- Work within the client's best-practice guidelines as set out by the Data Engineering Lead.
- Work with data modellers and testers to ensure pipelines are implemented correctly.
- Collaborate as part of a cross-functional team to understand business requirements and translate them into technical solutions.
Role Requirements
- Strong data engineer with experience in financial services.
- Knowledge of and experience building data pipelines in Azure Databricks.
- Demonstrate a continual desire to implement strategic or optimal solutions and, where possible, avoid workarounds or short-term tactical solutions.
- Work within an Agile team.
Experience/Skillset
- 8+ years' experience in data engineering.
- Good skills in SQL, Python and PySpark.
- Good knowledge of Azure Databricks (understanding of Delta tables, Apache Spark, Unity Catalog).
- Experience writing, optimizing, and analyzing SQL and PySpark code, with a robust capability to interpret complex data requirements and architect solutions.
- Good knowledge of the SDLC.
- Familiar with Agile/Scrum ways of working.
- Strong verbal and written communication skills.
- Ability to manage multiple priorities and deliver to tight deadlines.
WHY JOIN CAPCO You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.
We offer:
- A work culture focused on innovation and creating lasting value for our clients and employees
- Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
- A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
- A diverse, inclusive, meritocratic culture
#LI-Hybrid
Posted 1 week ago
4.0 - 9.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Summary: We are looking for a talented Data Engineer cum Database Developer with a strong background in the banking sector. The ideal candidate will have experience with SQL Server, AWS PostgreSQL, AWS Glue, and ETL tools, along with expertise in data ingestion frameworks and Control-M scheduling.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines to support data ingestion and transformation processes.
- Collaborate with cross-functional teams to gather requirements and implement solutions tailored to banking applications.
- Utilize SQL Server and AWS PostgreSQL for database development, optimization, and management.
- Implement data ingestion frameworks to ensure efficient and reliable data flow.
- Develop and maintain ETL processes using AWS Glue or other ETL tools, with Control-M for scheduling.
- Ensure data quality and integrity through validation and testing processes.
- Monitor and optimize system performance to support business analytics and reporting needs.
- Document data architecture, processes, and workflows for reference and compliance purposes.
- Stay updated on industry trends and best practices related to data engineering and management.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 4+ years of experience in data engineering and database development, preferably in the banking sector.
- Proficiency in SQL Server and AWS PostgreSQL.
- Experience with Databricks, AWS Glue, or other ETL tools (e.g., Informatica, ADF).
- Strong understanding of data ingestion frameworks and methodologies.
- Excellent problem-solving skills and attention to detail.
- Knowledge of securitization in the banking industry would be a plus.
- Strong communication skills for effective collaboration with stakeholders.
- Familiarity with cloud-based data architectures and services.
- Experience with data warehousing concepts and practices.
- Knowledge of data privacy and security regulations in banking.
Posted 1 week ago
2.0 - 6.0 years
12 - 16 Lacs
Kochi
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows for source-to-target data movement and implementing solutions that tackle the clients' needs.
Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Developed PySpark code for AWS Glue jobs and for EMR.
- Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (similar to a rules engine).
- Developed Hadoop streaming jobs using Python for integrating Python-API-supported applications.
- Developed Python code to gather data from HBase and designed the solution for implementation using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and utilized HiveContext objects to perform read/write operations.
- Rewrote some Hive queries as Spark SQL to reduce the overall batch time.
Preferred technical and professional experience
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
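To give a flavour of the Hive-to-Spark-SQL rewrite work this posting describes, here is a minimal, hedged PySpark sketch: it reads a Parquet dataset, applies a business transformation with the DataFrame API, and runs the aggregation through Spark SQL. The S3 paths, conversion rate, and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("hive-to-spark-sql-sketch").getOrCreate()

# Read raw transactions (path and schema are placeholders)
txns = spark.read.parquet("s3://example-bucket/raw/transactions/")

# DataFrame-API transformation: keep completed transactions and add a derived column
completed = (
    txns.filter(F.col("status") == "COMPLETED")
        .withColumn("amount_inr", F.col("amount_usd") * F.lit(83.0))
)

# Register a temp view so the old Hive query can be re-expressed as Spark SQL
completed.createOrReplaceTempView("completed_txns")

daily_totals = spark.sql(
    """
    SELECT txn_date, COUNT(*) AS txn_count, SUM(amount_inr) AS total_inr
    FROM completed_txns
    GROUP BY txn_date
    """
)

# Write the result back for downstream consumers
daily_totals.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_totals/")
```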
Posted 1 week ago
2.0 - 6.0 years
12 - 16 Lacs
Bengaluru
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows for source-to-target data movement and implementing solutions that tackle the clients' needs.
Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Developed PySpark code for AWS Glue jobs and for EMR.
- Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (similar to a rules engine).
- Developed Hadoop streaming jobs using Python for integrating Python-API-supported applications.
- Developed Python code to gather data from HBase and designed the solution for implementation using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and utilized HiveContext objects to perform read/write operations.
- Rewrote some Hive queries as Spark SQL to reduce the overall batch time.
Preferred technical and professional experience
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
Posted 1 week ago
2.0 - 5.0 years
14 - 17 Lacs
Mysuru
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows for source-to-target data movement and implementing solutions that tackle the clients' needs.
Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Must have 5+ years' experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (similar to a rules engine).
- Developed Python code to gather data from HBase and designed the solution for implementation using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and utilized HiveContext objects to perform read/write operations.
Preferred technical and professional experience
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
Posted 1 week ago
4.0 - 8.0 years
20 - 35 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Salary: 20 to 35 LPA | Experience: 3 to 7 years | Location: Gurgaon/Pune/Bengaluru | Notice: Immediate to 30 days
Job Profile: Experienced Data Engineer with a strong foundation in designing, building, and maintaining scalable data pipelines and architectures. Skilled in transforming raw data into clean, structured formats for analytics and business intelligence. Proficient in modern data tools and technologies such as SQL, T-SQL, Python, Databricks, and cloud platforms (Azure). Adept at data wrangling, modeling, ETL/ELT development, and ensuring data quality, integrity, and security. Collaborative team player with a track record of enabling data-driven decision-making across business units.
As a Data Engineer, the candidate will work on assignments for one of our utilities clients. Collaborating with cross-functional teams and stakeholders involves gathering data requirements, aligning business goals, and translating them into scalable data solutions. The role includes working closely with data analysts, scientists, and business users to understand needs, designing robust data pipelines, and ensuring data is accessible, reliable, and well-documented. Regular communication, iterative feedback, and joint problem-solving are key to delivering high-impact, data-driven outcomes that support organizational objectives. This position requires a proven track record of transforming processes and driving customer value and cost savings, with experience in running end-to-end analytics for large-scale organizations.
Responsibilities:
- Design, build, and maintain scalable data pipelines to support analytics, reporting, and advanced modeling needs.
- Collaborate with consultants, analysts, and clients to understand data requirements and translate them into effective data solutions.
- Ensure data accuracy, quality, and integrity through validation, cleansing, and transformation processes.
- Develop and optimize data models, ETL workflows, and database architectures across cloud and on-premises environments.
- Support data-driven decision-making by delivering reliable, well-structured datasets and enabling self-service analytics.
- Provide seamless integration with cloud platforms (Azure), making it easy to build and deploy end-to-end data pipelines in the cloud.
- Configure scalable clusters for handling large datasets and complex computations in Databricks, optimizing performance and cost management.
Must have:
- Client engagement experience and collaboration with cross-functional teams.
- Data engineering background in Databricks.
- Capable of working effectively as an individual contributor or in collaborative team environments.
- Effective communication and thought leadership with a proven record.
Candidate Profile:
- Bachelor's/Master's degree in economics, mathematics, computer science/engineering, operations research or related analytics areas.
- 3+ years of experience, which must be in data engineering.
- Hands-on experience with SQL, Python, Databricks, and cloud platforms like Azure.
- Prior experience in managing and delivering end-to-end projects.
- Outstanding written and verbal communication skills.
- Able to work in a fast-paced, continuously evolving environment and ready to take up uphill challenges.
- Able to understand cross-cultural differences and work with clients across the globe.
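As a hedged illustration of the Databricks work described above, the sketch below shows a PySpark job that cleans a raw dataset and writes it as a Delta table. It assumes it runs on a Databricks cluster where a SparkSession and Delta Lake support are already available; the paths, columns, and table name are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession already exists as `spark`; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Ingest raw meter readings landed by an upstream pipeline (path is a placeholder)
raw = spark.read.json("/mnt/raw/utilities/meter_readings/")

# Basic cleansing: drop malformed rows, standardise types, deduplicate
clean = (
    raw.dropna(subset=["meter_id", "reading_ts", "kwh"])
       .withColumn("reading_ts", F.to_timestamp("reading_ts"))
       .withColumn("kwh", F.col("kwh").cast("double"))
       .dropDuplicates(["meter_id", "reading_ts"])
)

# Persist as a Delta table so analysts can query it with SQL or self-service BI tools
(
    clean.write
         .format("delta")
         .mode("overwrite")
         .saveAsTable("analytics.meter_readings_clean")
)
```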
Posted 1 week ago
7.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our client in India is one of the leading providers of risk, financial services and business advisory, internal audit, corporate governance, and tax and regulatory services. Our client was established in India in September 1993 and has rapidly built a significant competitive presence in the country. The firm operates from its offices in Mumbai, Pune, Delhi, Kolkata, Chennai, Bangalore, Hyderabad, Kochi, Chandigarh and Ahmedabad, and offers its clients a full range of services, including financial and business advisory, tax and regulatory. Our client has a client base of over 2700 companies. Their global approach to service delivery helps provide value-added services to clients. The firm serves leading information technology companies and has a strong presence in the financial services sector in India, while serving a number of market leaders in other industry segments.
Job Requirements
Mandatory Skills
- Bachelor's or higher degree in Computer Science or a related discipline, or equivalent (minimum 7+ years' work experience).
- At least 6+ years of consulting or client service delivery experience in Azure/Microsoft data engineering.
- At least 4+ years of experience in developing data ingestion, data processing and analytical pipelines for big data, relational databases such as SQL Server, and data warehouse solutions such as Synapse/Azure Databricks and Microsoft Fabric.
- Hands-on experience implementing data ingestion, ETL and data processing using Azure services: Fabric, OneLake, ADLS, Azure Data Factory, Azure Functions, services in Microsoft Fabric, etc.
- Minimum of 5+ years of hands-on experience in Azure and big data technologies such as Fabric, Databricks, Python, SQL, ADLS/Blob, PySpark/Spark SQL.
- Minimum of 3+ years of RDBMS experience.
- Experience in using big data file formats and compression techniques.
- Experience working with developer tools such as Azure DevOps, Visual Studio Team Server, Git, etc.
Preferred Skills
Technical Leadership & Demo Delivery:
- Provide technical leadership to the data engineering team, guiding the design and implementation of data solutions.
- Deliver compelling and clear demonstrations of data engineering solutions to stakeholders and clients, showcasing functionality and business value.
- Communicate fluently in English with clients, translating complex technical concepts into business-friendly language during presentations, meetings, and consultations.
ETL Development & Deployment on Azure Cloud:
- Design, develop, and deploy robust ETL (Extract, Transform, Load) pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Notebooks, Azure Functions, and other Azure services.
- Ensure scalable, efficient, and secure data integration workflows that meet business requirements.
- Design and develop data quality frameworks to validate, cleanse, and monitor data integrity.
- Perform advanced data transformations, including Slowly Changing Dimensions (SCD Type 1 and Type 2), using Fabric Notebooks or Databricks.
- Preferable to have the following skills: Azure Document Intelligence, custom apps, Blob Storage.
Microsoft Certifications:
- Hold relevant role-based Microsoft certifications, such as DP-203: Data Engineering on Microsoft Azure and AI-900: Microsoft Azure AI Fundamentals.
- Additional certifications in related areas (e.g., PL-300 for Power BI) are a plus.
Azure Security & Access Management:
- Strong knowledge of Azure Role-Based Access Control (RBAC) and Identity and Access Management (IAM).
- Implement and manage access controls, ensuring data security and compliance with organizational and regulatory standards on Azure Cloud.
Additional Responsibilities & Skills:
- Team Collaboration: Mentor junior engineers, fostering a culture of continuous learning and knowledge sharing within the team.
- Project Management: Oversee data engineering projects, ensuring timely delivery within scope and budget, while coordinating with cross-functional teams.
- Data Governance: Implement data governance practices, including data lineage, cataloging, and compliance with standards like GDPR or CCPA.
- Performance Optimization: Optimize ETL pipelines and data workflows for performance, cost-efficiency, and scalability on Azure platforms.
- Cross-Platform Knowledge: Familiarity with integrating Azure services with other cloud platforms (e.g., AWS, GCP) or hybrid environments is an added advantage.
Soft Skills & Client Engagement:
- Exceptional problem-solving skills with a proactive approach to addressing technical challenges.
- Strong interpersonal skills to build trusted relationships with clients and stakeholders.
- Ability to manage multiple priorities in a fast-paced environment, ensuring high-quality deliverables.
Posted 1 week ago
3.0 - 8.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Data Engineering
Good-to-have skills: NA
Minimum 12 years of experience is required.
Educational Qualification: BTECH
Summary: We are seeking a hands-on Senior Engineering Manager of Data Platform to spearhead the development of capabilities that power Vertex products while providing a connected experience for our customers. This role demands a deep engineering background with hands-on experience in building and scaling production-level systems. The ideal candidate will excel in leading teams to deliver high-quality data products and will provide mentorship, guidance, and leadership. In this role, you will work to increase the domain data coverage and adoption of the Data Platform by promoting a connected user experience through data. You will increase data literacy and trust by leading our Data Governance and Master Data Management initiatives. You will contribute to the vision and roadmap of self-serve capabilities through the Data Platform.
Roles & Responsibilities:
- Be hands-on in leading the development of features that enhance the self-service capabilities of our data platform, ensuring the platform is scalable, reliable, and fully aligned with business objectives, and defining and implementing best practices in data architecture, data modeling, and data governance.
- Work closely with Product, Engineering, and other departments to ensure the data platform meets business requirements.
- Influence cross-functional initiatives related to data tools, governance, and cross-domain data sharing. Ensure technical designs are thoroughly evaluated and aligned with business objectives.
- Determine appropriate recruiting of staff to achieve goals and objectives. Interview, recruit, develop and retain top talent.
- Manage and mentor a team of engineers, fostering a collaborative and high-performance culture, and encouraging a growth mindset and accountability for outcomes. Interpret how the business strategy links to individual roles and responsibilities.
- Provide career development opportunities and establish processes and practices for knowledge sharing and communication.
- Partner with external vendors to address issues and technical challenges.
- Stay current with emerging technologies and industry trends in the field to ensure the platform remains cutting-edge.
Professional & Technical Skills:
- 12+ years of hands-on experience in software development (preferably in the data space), with 3+ years of people management experience, demonstrating success in building, growing, and managing multiple teams.
- Extensive experience in architecting and building complex data platforms and products.
- In-depth knowledge of cloud-based services and data tools such as Snowflake, AWS, and Azure, with expertise in data ingestion, normalization, and modeling.
- Strong experience in building and scaling production-level cloud-based data systems, utilizing data ingestion tools like Fivetran, data quality and observability tools like Monte Carlo, data catalogs like Atlan, and master data tools like Reltio or Informatica.
- Thorough understanding of best practices regarding agile software development and software testing.
- Experience deploying cloud-based applications using automated CI/CD processes and container technologies.
- Understanding of security best practices when architecting SaaS applications on cloud infrastructure.
- Ability to understand complex business systems and a willingness to learn and apply new technologies as needed.
- Proven ability to influence and deliver high-impact initiatives. Forward-thinking mindset with the ability to define and drive the team's mission, vision, and long-term strategies.
- Excellent leadership skills with a track record of managing teams and collaborating effectively across departments. Strong written and verbal communication skills.
- Proven ability to work with and lead remote teams to achieve sustainable long-term success.
- A "work together and get stuff done" attitude without losing sight of quality, and a sense of responsibility to customers and the team.
Additional Information:
- The candidate should have a minimum of 12 years of experience in Data Engineering.
- This position is based at our Hyderabad office.
- A 15-year full-time education is required.
Qualification: BTECH
Posted 1 week ago
4.0 - 6.0 years
20 - 25 Lacs
Noida, Pune, Chennai
Work from Office
We are seeking a skilled and detail-oriented Data Engineer with 4 to 6 years of hands-on experience in Microsoft Fabric, Snowflake, and Matillion. The ideal candidate will play a key role in supporting MS Fabric and migrating from MS Fabric to Snowflake and Matillion.
Roles and Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines using Matillion and integrate data from various sources.
- Architect and optimize Snowflake data warehouses, ensuring efficient data storage, querying, and performance tuning.
- Leverage Microsoft Fabric for end-to-end data engineering tasks, including data ingestion, transformation, and reporting.
- Collaborate with data analysts, scientists, and business stakeholders to deliver high-quality, consumable data products.
- Implement data quality checks, monitoring, and observability across pipelines.
- Automate data workflows and support CI/CD practices for data deployments.
- Troubleshoot performance bottlenecks and data pipeline failures with a root-cause analysis mindset.
- Maintain thorough documentation of data processes, pipelines, and architecture.
Strong expertise with:
- Microsoft Fabric (Dataflows, Pipelines, Lakehouse, Notebooks, etc.)
- Snowflake (warehouse sizing, SnowSQL, performance tuning)
- Matillion (ETL/ELT orchestration, job optimization, connectors)
Also required:
- Proficiency in SQL and data modeling (dimensional/star schema, normalization).
- Experience with Python or other scripting languages for data manipulation.
- Familiarity with version control tools (e.g., Git) and CI/CD workflows.
- Solid understanding of cloud data architecture (Azure preferred).
- Strong problem-solving and debugging skills.
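For illustration, here is a minimal, hedged sketch of the kind of Snowflake loading step a Matillion or Fabric pipeline might hand off to, using the Snowflake Python connector. The account, credentials, stage, and table names are hypothetical placeholders, and in practice this logic would usually live inside the orchestration tool rather than a standalone script.

```python
import snowflake.connector

# Connection parameters are placeholders; real values would come from a secrets manager
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Bulk-load files already landed in an external stage into a staging table
    cur.execute(
        """
        COPY INTO STAGING.ORDERS_RAW
        FROM @landing_stage/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """
    )
    # Simple transformation into a reporting table (star-schema fact load, simplified)
    cur.execute(
        """
        INSERT INTO ANALYTICS.REPORTING.FCT_ORDERS (order_id, customer_id, order_date, amount)
        SELECT order_id, customer_id, order_date, amount
        FROM STAGING.ORDERS_RAW
        """
    )
finally:
    conn.close()
```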
Posted 1 week ago
7.0 - 10.0 years
0 - 1 Lacs
Bengaluru
Remote
Job Title: Senior Data Engineer - Contractual (Remote | 1-2 Month Project)
Company: Covalensedigital
Job Type: Contract (short-term: 1 to 2 months)
Location: Remote
Experience: 7+ years (3+ years in Databricks/Azure data engineering)
Job Description: We are looking for an experienced Senior Data Engineer for a short-term remote project (1 to 2 months) to join Covalensedigital on a contractual basis.
Key Responsibilities:
- Design and implement robust data pipelines using Azure Data Factory (ADF) and Databricks.
- Work on data ingestion, transformation, cleansing, and aggregation from multiple sources.
- Use Python, Spark, and SQL for developing scalable data workflows.
- Integrate pipelines with external APIs and systems.
- Ensure data quality, accuracy, and adherence to standards.
- Collaborate with data scientists, analysts, and engineering teams.
- Monitor and troubleshoot data pipelines for smooth operation.
Must-Have Skills:
- Python / Spark / SQL / ADLS / Databricks / ADF / ETL
- 3+ years of hands-on experience in Azure Databricks
- Deep understanding of large-scale data architecture, data lakes, warehousing, and cloud/on-premise hybrid solutions
- Strong experience in data cleansing and Azure Data Explorer workflows
- Ability to work independently and deliver high-quality output within timelines
- Excellent communication skills
Bonus:
- Experience in insurance domain projects
- Familiarity with data quality frameworks, data cataloging, and data profiling tools
Contract Details: Duration: 1 to 2 months | Type: Contractual (Remote) | Start: Immediate
How to Apply: Interested candidates, please send your resume to kalaivanan.balasubamaniam@covalensedigital.com
Thanks, Kalai - 8015302990
Posted 1 week ago
4.0 - 6.0 years
18 - 22 Lacs
Pune
Work from Office
We are looking for a GenAI/ML Engineer to design, develop, and deploy cutting-edge AI/ML models and Generative AI applications. This role involves working on large-scale enterprise use cases, implementing Large Language Models (LLMs), building Agentic AI systems, and developing data ingestion pipelines. The ideal candidate should have hands-on experience with AI/ML development and Generative AI applications, and a strong foundation in deep learning, NLP, and MLOps practices.
Key Responsibilities
- Design, develop, and deploy AI/ML models and Generative AI applications for various enterprise use cases.
- Implement and integrate Large Language Models (LLMs) using frameworks such as LangChain, LlamaIndex, and RAG pipelines.
- Develop Agentic AI systems capable of multi-step reasoning and autonomous decision-making.
- Create secure and scalable data ingestion pipelines for structured and unstructured data, enabling indexing, vector search, and advanced retrieval techniques.
- Collaborate with cross-functional teams (data engineers, product managers, architects) to deploy AI solutions and enhance the AI stack.
- Build CI/CD pipelines for ML/GenAI workflows and support end-to-end MLOps practices.
- Leverage Azure and Databricks for training, serving, and monitoring AI models at scale.
Required Qualifications & Skills (Mandatory)
- 4+ years of hands-on experience in AI/ML development, including Generative AI applications.
- Expertise in RAG, LLMs, and Agentic AI implementations.
- Strong experience with LangChain, LlamaIndex, or similar LLM orchestration frameworks.
- Proficiency in Python and key ML/DL libraries: TensorFlow, PyTorch, Scikit-learn.
- Solid foundation in deep learning, Natural Language Processing (NLP), and Transformer-based architectures.
- Experience in building data ingestion, indexing, and retrieval pipelines for real-world enterprise use cases.
- Hands-on experience with Azure cloud services and Databricks.
- Proven track record in designing CI/CD pipelines and using MLOps tools like MLflow, DVC, or Kubeflow.
Soft Skills
- Strong problem-solving and critical-thinking ability.
- Excellent communication skills, with the ability to explain complex AI concepts to non-technical stakeholders.
- Ability to collaborate effectively in agile, cross-functional teams.
- A growth mindset, eager to explore and learn emerging technologies.
Preferred Qualifications
- Familiarity with vector databases such as FAISS, Pinecone, or Weaviate.
- Experience with AutoGPT, CrewAI, or similar agent frameworks.
- Exposure to Azure OpenAI, Cognitive Search, or Databricks ML tools.
- Understanding of AI security, responsible AI, and model governance.
Role Dimensions
- Design and implement innovative GenAI applications to address complex business problems.
- Work on large-scale, complex AI solutions in collaboration with cross-functional teams.
- Take ownership of the end-to-end AI pipeline, from model development to deployment and monitoring.
Success Measures (KPIs)
- Successful deployment of AI and Generative AI applications.
- Optimization of data pipelines and model performance at scale.
- Contribution to the successful adoption of AI-driven solutions within enterprise use cases.
- Effective collaboration with cross-functional teams, ensuring smooth deployment of AI workflows.
Competency Alignment
- AI/ML Development: Expertise in building and deploying scalable and efficient AI models.
- Generative AI: Strong hands-on experience in Generative AI, LLMs, and RAG frameworks.
- MLOps: Proficiency in designing and maintaining CI/CD pipelines and implementing MLOps practices.
- Cloud Platforms: Experience with Azure and Databricks for AI model training and serving.
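To make the RAG (retrieval-augmented generation) concept in this posting concrete, here is a minimal, hedged sketch in plain Python and NumPy: documents are embedded, the most relevant chunk is retrieved by cosine similarity, and a prompt is assembled for an LLM. The embed() and call_llm() functions are hypothetical stand-ins for whatever embedding model and LLM endpoint (e.g., via LangChain or Azure OpenAI) a real system would use.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding function; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real system would hit an LLM API here."""
    return f"[LLM answer based on prompt of {len(prompt)} chars]"

# Index step: embed the document chunks once
documents = [
    "Invoices are processed within 5 business days.",
    "Refund requests must be raised within 30 days of purchase.",
    "Support is available 24x7 via the enterprise portal.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def answer(question: str) -> str:
    # Retrieval step: rank chunks by cosine similarity to the question
    q = embed(question)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    best_chunk = documents[int(np.argmax(sims))]

    # Generation step: ground the LLM with the retrieved context
    prompt = f"Answer using only this context:\n{best_chunk}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("How long do refunds take?"))
```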
Posted 1 week ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled Quality Engineer - Data to ensure the reliability, accuracy, and performance of data pipelines and AI/ML models within our SmartFM platform. This role is critical to delivering trusted data and actionable insights that drive smart building optimization and operational efficiency.
Key Responsibilities:
- Design and implement robust QA strategies for data pipelines, ML models, and agentic workflows.
- Test and validate data ingestion and streaming systems (e.g., StreamSets, Kafka) for accuracy, completeness, and resilience.
- Ensure data integrity and schema validation within MongoDB and other data stores.
- Collaborate with data engineers to proactively identify and resolve data quality issues.
- Partner with data scientists to validate ML/DL/LLM model performance, fairness, and robustness.
- Automate testing processes using frameworks such as Pytest, Great Expectations, and Deepchecks.
- Monitor production pipelines for anomalies, data drift, and model degradation.
- Participate in code reviews and QA audits, and maintain comprehensive documentation of test plans and results.
- Continuously evaluate and improve QA processes based on industry best practices and emerging trends.
Required Technical Skills:
- 5-10 years of QA experience with a focus on data validation and ML model testing.
- Strong command of SQL for complex data queries and integrity checks.
- Practical experience with StreamSets, Kafka, and MongoDB.
- Proficient in Python scripting for automation and testing.
- Familiarity with ML testing metrics, model validation techniques, and bias detection.
- Exposure to cloud platforms such as Azure, AWS, or GCP.
- Working knowledge of QA tools like Pytest, Great Expectations, and Deepchecks.
- Understanding of Node.js and React-based applications is an added advantage.
Additional Qualifications:
- Excellent communication, documentation, and cross-functional collaboration skills.
- Strong analytical mindset and high attention to detail.
- Ability to work with cross-disciplinary teams including Engineering, Data Science, and Product.
- Passion for continuous learning and adoption of new QA tools and methodologies.
- Domain knowledge in facility management, IoT, or building automation systems is a strong plus.
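As a hedged illustration of the automated data-validation work this role calls for, here is a small pytest-based sketch that checks a batch of ingested records with pandas. The column names, thresholds, and loading function are hypothetical; in practice the same checks could also be expressed declaratively with Great Expectations or Deepchecks.

```python
import pandas as pd
import pytest

def load_ingested_batch() -> pd.DataFrame:
    """Hypothetical loader; a real test would read from the staging store (e.g., MongoDB)."""
    return pd.DataFrame(
        {
            "sensor_id": ["s1", "s2", "s3"],
            "temperature_c": [21.5, 22.1, 19.8],
            "event_ts": pd.to_datetime(["2024-05-01", "2024-05-01", "2024-05-01"]),
        }
    )

@pytest.fixture
def batch() -> pd.DataFrame:
    return load_ingested_batch()

def test_no_missing_keys(batch):
    # Every record must carry a sensor identifier
    assert batch["sensor_id"].notna().all()

def test_no_duplicate_events(batch):
    # The same sensor should not report twice for the same timestamp
    assert not batch.duplicated(subset=["sensor_id", "event_ts"]).any()

def test_temperature_within_plausible_range(batch):
    # Readings outside -40..85 C usually indicate ingestion or unit errors
    assert batch["temperature_c"].between(-40, 85).all()
```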
Posted 1 week ago
5.0 - 9.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Hybrid
Databricks Developer
Primary Skills: Azure Data Factory, Azure Databricks
Secondary Skills: SQL, Sqoop, Hadoop
Experience: 5 to 9 years
Location: Chennai, Bangalore, Pune, Coimbatore
Requirements:
- Cloud certified in one of these categories: Azure Data Engineer, Azure Data Factory, Azure Databricks
- Spark (PySpark or Scala), SQL, data ingestion, curation
- Semantic modelling / optimization of the data model to work within Rahona
- Experience in Azure ingestion from on-prem sources, e.g. mainframe, SQL Server, Oracle
- Experience in Sqoop / Hadoop
- Microsoft Excel (for metadata files with requirements for ingestion)
- Any other certificate in Azure/AWS/GCP and hands-on data engineering experience in cloud
- Strong programming skills in at least one of Python, Scala, or Java
Posted 1 week ago
10.0 - 16.0 years
35 - 100 Lacs
Mumbai
Work from Office
Job Summary
As an ATS (Account Technology Specialist) in NetApp's Sales function, you will utilize strong customer handling and technical competencies to set objectives and execute plans for winning sales campaigns. This challenging and high-visibility position provides a huge opportunity to grow in your career and cover the largest account base in the region. You develop long-term strategies and shorter-term plans to meet aggressive performance goals with the channel partners and internal stakeholders, including the Client Executive and the District Manager. You must be extremely results-driven, customer-focused, tech-savvy, and skilled at building internal relationships and external partnerships.
Essential Functions
- Provide technical oversight to channel partners and customers within the territory to drive all pertinent issues, sales campaigns, and goal attainment.
- Work towards meeting the target along with the client executive for the territory by devising short-term goals and long-term strategies for the assigned accounts.
- Evangelise NetApp's proposition in the assigned territory.
- Drive technical closures for any sales campaign, positioning NetApp as the most viable solution for prospective customers.
Job Requirements
- Excellent verbal and written communication skills, including presentation skills.
- Proven experience in presales, designing, and proposing technical solutions.
- Excellent presentation, relationship building, and negotiating skills.
- Ability to work collaboratively with functional peers across functions, including Marketing, Sales, Sales Operations, Customer Support, and Product Development.
- Strong understanding of data storage, data protection, disaster recovery, and competitive offerings in the marketplace.
- Understanding of cloud technologies is highly desirable.
- Ability to convey and analyze information clearly as needed to help customers make buying decisions.
- An excellent understanding of how technology products and solutions solve business problems.
- The ability to hold key technical decision-maker and CXO relationships within major accounts in the assigned territory.
Education
At least 15 years of experience in technical presales. A Bachelor of Science degree in Engineering, Computer Science, or a related field is preferred; a graduate degree is mandatory.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Vadodara, Gujarat
On-site
As a Data Solutions Developer, you will collaborate closely with the Head of Data to create and implement scalable data solutions that facilitate the consolidation of company-wide data into a controlled data warehouse environment, potentially utilizing Azure Data Lake and SQL Server. Your primary focus will be on constructing efficient ELT pipelines, APIs, and database architectures to facilitate the high-speed ingestion and storage of data from various sources into a unified repository. This repository will act as the foundation for business reporting and operational decision-making processes.
Your responsibilities will include designing and implementing scalable database and data warehouse solutions, establishing and managing data ingestion pipelines for both batch and real-time integration, defining and maintaining relational schema and metadata frameworks, and supporting the development of a centralized data repository for unified reporting and analysis. Furthermore, you will optimize database structures and queries for performance and scalability, while also documenting and version controlling all data pipeline and architecture components.
In addition to your focus on centralizing data infrastructure, you will address ad hoc data solution requirements to ensure the continuity of business operations. You will develop and maintain robust APIs for internal and external data exchange, facilitate efficient data flow between operational systems and reporting platforms, and support system integrations with various tools such as CRM, finance, operations, and marketing. You will collaborate with IT and compliance personnel to enforce access control, encryption, and PII protection across all data solutions, as well as ensure compliance with data protection regulations such as GDPR. Promoting and upholding data quality, governance, and security standards will be an integral part of your role. Acting as a subject matter expert, you will provide guidance on best practices in data engineering and cloud architecture, offer ongoing support to internal teams on performance optimization and scalable solution design, and take on DBA responsibilities for the SBR database.
To be successful in this role, you should have at least 5 years of experience as a SQL Developer or in a similar role, demonstrate a strong understanding of system design and software architecture, and be capable of designing and building performant and scalable data infrastructure solutions. Proficiency in SQL and its variations among popular databases, expertise in optimizing complex SQL statements, and a solid grasp of API knowledge and data ingestion pipeline creation are essential skills for this position.
Posted 1 week ago
7.0 - 12.0 years
12 - 18 Lacs
Pune, Bengaluru
Hybrid
> Strong programming expertise in PySpark and Python.
> Solid understanding of Spark internals, DAG optimization, partitioning, broadcast joins, etc.
> Hands-on experience with one or more cloud platforms.
> Experience with API integrations.
Required Candidate Profile: The ideal candidate has strong expertise in PySpark optimization, API integration, and big data ingestion using AWS, GCP, or Azure, with a solid foundation in SQL.
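To illustrate the Spark-internals knowledge this posting asks for, here is a brief, hedged PySpark sketch of a broadcast join: the small dimension table is shipped to every executor so the large fact table avoids a shuffle. Table paths and columns are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast-join-sketch").getOrCreate()

# Large fact table and small dimension table (paths are placeholders)
events = spark.read.parquet("s3://example-bucket/facts/events/")        # billions of rows
countries = spark.read.parquet("s3://example-bucket/dims/countries/")   # a few hundred rows

# broadcast() hints Spark to replicate the small table to all executors,
# turning a shuffle join into a map-side join
enriched = events.join(broadcast(countries), on="country_code", how="left")

# Repartitioning by a suitable key before writing keeps output files balanced
enriched.repartition("event_date").write.mode("overwrite").parquet(
    "s3://example-bucket/curated/events_enriched/"
)
```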
Posted 1 week ago
3.0 - 8.0 years
15 - 27 Lacs
Pune, Bengaluru
Work from Office
Velotio Technologies is a product engineering company working with innovative startups and enterprises. We have provided full-stack product development for 110+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & machine learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products.
Requirements
- Implement a cloud-native analytics platform with high performance and scalability.
- Build an API-first infrastructure for data in and data out.
- Build data ingestion capabilities for internal data as well as external spend data.
- Leverage data classification AI algorithms to cleanse and harmonize data.
- Own data modelling, microservice orchestration, and monitoring & alerting.
- Build solid expertise in the entire application suite and leverage this knowledge to better design application and data frameworks.
- Adhere to iterative development processes to deliver concrete value each release while driving the longer-term technical vision.
- Engage with cross-organizational teams such as Product Management, Integrations, Services, Support, and Operations to ensure the success of overall software development, implementation, and deployment.
What you will bring:
- Bachelor's degree in computer science, information systems, computer engineering, systems analysis or a related discipline, or equivalent work experience.
- 4 to 8 years of experience building enterprise SaaS web applications using one or more modern frameworks/technologies: Java / .Net / C etc.
- Exposure to Python and familiarity with AI/ML-based data cleansing, deduplication and entity resolution techniques.
- Familiarity with an MVC framework such as Django or Rails.
- Full-stack web development experience with hands-on experience building responsive UIs, single-page applications, and reusable components, with a keen eye for UI design and usability.
- Understanding of microservices and event-driven architecture.
- Strong knowledge of APIs and integration with the backend.
- Experience with relational SQL and NoSQL databases such as MySQL, PostgreSQL, AWS Aurora, or Cassandra.
- Proven expertise in performance optimization and monitoring tools.
- Strong knowledge of cloud platforms (e.g., AWS, Azure, or GCP).
- Experience with CI/CD tooling and software delivery and bundling mechanisms.
Note: We need to fill this position soon. Please apply if your notice period is less than 15 days.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
At Guidewire, we take pride in supporting our customers' mission to safeguard the world's most valuable investments. Insurance plays a crucial role in protecting our homes, businesses, and other assets, providing aid in times of need caused by natural disasters or accidents. Our goal is to provide a platform that enables Property and Casualty (P&C) insurers to offer the necessary products and services for individuals to recover from life's most challenging events.
We are seeking a product management professional to join our Analytics and Data Services (ADS) team at Guidewire. The ADS team is dedicated to defining and designing new capabilities for the insurance market through our cutting-edge software solutions. In this role, you will collaborate with a diverse team of 50 engineers, data scientists, and risk modelers to create a dynamic cyber insurance data and analytics product suite. This suite will leverage data and machine learning to address various use cases, including cyber risk underwriting, pricing, enterprise risk management, and cyber threat assessment, which is identified as the #1 risk to US national security.
Reporting to the Cyence Product Management team, you will play a vital role in driving innovation within an entrepreneurial culture. You will thrive in an environment where our core values of Integrity, Rationality, and Collegiality are ingrained in our daily operations. As a potential candidate, you should have a background in software, data, and analytics, along with experience working in a fast-paced environment involving multiple teams across different locations. You must be a problem-solver who is enthusiastic about developing top-notch products and overcoming market challenges. Your attention to detail, ability to motivate others, and collaboration skills will be key to supporting various teams, including Platform, UX, modeling, ML, Data Science, Quality Assurance, and GTM.
Your responsibilities will include:
- Vision: Envisioning innovative solutions, promoting our cyber vision, and driving breakthroughs to simplify complexity.
- Technical Mastery: Collaborating with software and data teams, implementing best practices in product management, and owning the end-to-end requirements documentation process.
- Product Leadership: Cultivating a culture of curiosity and craftsmanship, inspiring and developing R&D teams, and establishing product goals that drive motivation.
- Execution: Achieving business outcomes through forward-thinking products, contributing to the creation and communication of roadmaps, and building trust through transparency and consistent delivery.
Qualifications we are looking for:
- Minimum 3 years of experience as a product manager, demonstrating a track record of delivering complex team projects on time and with high quality.
- 3+ years of experience in technical data management, integration architecture of cloud solutions, and security.
- Strong desire to address complex insurance challenges using a B2B SaaS product model.
- Excellent attention to detail and communication skills.
- Proactive, focused, and quick to take ownership of tasks.
- Familiarity with tools such as Aha, the Atlassian suite, databases, and prototyping tools; technical knowledge of big data and cloud technologies is preferred.
- Conceptual understanding of microservices, distributed systems, AWS, and the Big Data ecosystem.
- Comfortable with data ingestion, cataloging, integration, and enrichment concepts.
- Previous experience with B2B SaaS companies and software engineering is advantageous.
- Bachelor's or Master's degree in engineering, analytics, mathematics, or software development.
- Ability to overlap at least 2 hours with US time zones 3-4 days a week.
About Guidewire: Guidewire is the trusted platform for P&C insurers to engage, innovate, and grow efficiently. Our platform integrates digital, core, analytics, and AI services, delivered as a cloud service. With over 540 insurers in 40 countries relying on Guidewire, we support new ventures as well as the largest and most complex organizations worldwide. As a partner to our customers, we continuously evolve to ensure their success. With a remarkable implementation track record of over 1600 successful projects, supported by the industry's largest R&D team and partner ecosystem, we are dedicated to accelerating integration, localization, and innovation through our Marketplace. For more information, please visit www.guidewire.com and follow us on Twitter: @Guidewire_PandC.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana
On-site
You should have 6-10 years of hands-on experience in Java development, focusing on building robust data processing components. Your proficiency should include working with Google Cloud Pub/Sub or similar streaming platforms such as Kafka. You should be skilled in JSON schema design, data serialization, and handling structured data formats.
As an experienced engineer, you should be capable of designing BigQuery views optimized for performance, scalability, and ease of consumption. Your responsibilities will include enhancing and maintaining Java-based adapters to publish transactional data from the Optimus system to Google Pub/Sub, and implementing and managing JSON schemas for smooth and accurate data ingestion into BigQuery. Collaboration with cross-functional teams is essential to ensure that data models are structured to support high-performance queries and business usability. Strong communication and teamwork skills are required, along with the ability to align technical solutions with stakeholder requirements. Additionally, you will contribute to continuous improvements in data architecture and integration practices.
The job location is Hyderabad/Bangalore.
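For context on the publish pattern this role works with, here is a hedged sketch of sending a JSON-encoded transactional record to a Pub/Sub topic. The posting's actual adapters are Java-based; this illustration uses the Python client purely for brevity, and the project, topic, and payload fields are hypothetical.

```python
import json

from google.cloud import pubsub_v1

PROJECT_ID = "example-project"        # placeholder
TOPIC_ID = "optimus-transactions"     # placeholder

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

def publish_transaction(txn: dict) -> str:
    # The JSON payload must match the schema agreed with the BigQuery ingestion layer
    payload = json.dumps(txn).encode("utf-8")
    # Attributes carry routing/metadata without touching the message body
    future = publisher.publish(topic_path, data=payload, source="optimus")
    return future.result()  # blocks until the server acknowledges the message

if __name__ == "__main__":
    message_id = publish_transaction(
        {"txn_id": "T-1001", "account": "A-42", "amount": 199.99, "currency": "INR"}
    )
    print(f"Published message {message_id}")
```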
Posted 2 weeks ago
1.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
As an SF Data Cloud Developer with 3-6 years of experience, you will be responsible for designing and implementing 1st- and 3rd-party data ingestion from various sources such as web, CRM, mobile apps, and media platforms. You should have a strong understanding of Data Streams, Data Model Objects (DMOs), Calculated Insights, and Segmentation within Salesforce Data Cloud. A Pharma background or familiarity with privacy frameworks like HIPAA is preferable, and being SF Data Cloud Consultant Certified would be advantageous. Your role will involve designing and implementing Calculated Insights to derive metrics for personalization and segmentation. You should understand the customer lifecycle and journey orchestration, utilizing unified profiles to enhance marketing outcomes. Troubleshooting issues related to data mapping, identity stitching, or segment activation flows will also be part of your responsibilities.
As an SF Data Cloud professional with 1-3 years of experience, you will be expected to have hands-on experience with SF Data Cloud; an SF certification is preferable for this role. You should possess an understanding of Data Streams and models within SF Data Cloud, and previous experience in integrating downstream systems would be an added advantage.
Overall, both positions require a deep understanding of Salesforce Data Cloud functionalities and the ability to work effectively with data ingestion, segmentation, and insights generation.
Posted 2 weeks ago