
322 Data Ingestion Jobs - Page 12

JobPe aggregates job listings for easy access; you apply directly on the original job portal.

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Microsoft Azure Data Services
Good-to-have skills: NA
Minimum experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring the smooth functioning of applications and providing solutions to work-related problems. A typical day involves collaborating with team members, analyzing business requirements, and developing and implementing application solutions. You will also actively participate in team discussions, contribute solutions to work-related problems, and grow into a subject matter expert in your field.

Roles & Responsibilities:
- Perform independently and become an SME.
- Participate actively in team discussions and contribute to solving work-related problems.
- Collaborate with team members to analyze business requirements.
- Design and develop applications based on business process and application requirements.
- Configure applications to ensure smooth functioning and optimal performance.
- Troubleshoot and debug application issues to ensure proper functionality.
- Collaborate with cross-functional teams to integrate applications with other systems.
- Stay updated with emerging technologies and industry trends to enhance application development processes.

Professional & Technical Skills:
- Must-have: Proficiency in Microsoft Azure Data Services.
- Good-to-have: Experience with cloud-based application development.
- Strong understanding of data storage and management in Azure.
- Experience with Azure SQL Database, Azure Cosmos DB, and Azure Data Lake Storage.
- Hands-on experience with Azure Data Factory and Azure Databricks.
- Knowledge of Azure Functions and Azure Logic Apps for application integration.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Azure Data Services.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
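To make the Azure stack above concrete, here is a minimal sketch of an ingestion step such a role might implement in an Azure Databricks notebook: reading raw CSV files from ADLS Gen2 and landing them as a Delta table. The storage account, container, column, and table names are hypothetical, and `spark` is the session Databricks predefines in notebooks.

```python
# Hypothetical ADLS Gen2 path for the raw zone.
raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/2024/"

# Ingest raw CSVs (assumes `spark` is the predefined Databricks session).
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv(raw_path))

# Light cleanup before landing in the curated zone; "order_id" is hypothetical.
clean = df.dropDuplicates().na.drop(subset=["order_id"])

# Persist as a managed Delta table.
(clean.write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("curated.sales_orders"))
```

In practice a step like this would typically run as one activity in an Azure Data Factory pipeline.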

Posted 2 months ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Google BigQuery
Good-to-have skills: No Function Specialty
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for creating efficient and scalable solutions using Google BigQuery. Your typical day will involve collaborating with the team, analyzing business requirements, designing and implementing application features, and ensuring the applications meet quality standards and performance goals.

Roles & Responsibilities:
1. Design, create, code, and support a variety of data pipelines and models on GCP cloud technology.
2. Strong hands-on exposure to GCP services such as BigQuery and Composer.
3. Partner with business/data analysts, architects, and other key project stakeholders to deliver data requirements.
4. Develop data integration and ETL (Extract, Transform, Load) processes.
5. Support existing data warehouses and related pipelines.
6. Ensure data quality, security, and compliance.
7. Optimize data processing and storage efficiency; troubleshoot issues in the data space.
8. Learn new skills/tools used in the data space (e.g., dbt, Monte Carlo).
9. Excellent verbal and written communication skills; excellent analytical skills with an Agile mindset.
10. Strong attention to detail and delivery accuracy.
11. Self-motivated team player with the ability to overcome challenges and achieve desired results.
12. Work effectively in a globally distributed environment.

Professional & Technical Skills:
Skill proficiency expectations:
- Expert: Data Storage, BigQuery, SQL, Composer, Data Warehousing Concepts
- Intermediate: Python
- Basic/Preferred: DB, Kafka, Pub/Sub

- Must-have: Proficiency in Google BigQuery.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Google BigQuery.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
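For context on the BigQuery work described above, here is a minimal sketch using the official `google-cloud-bigquery` Python client to run an analytics query. The project, dataset, and table names are hypothetical, and application-default credentials are assumed.

```python
from google.cloud import bigquery

# Assumes application-default credentials; project name is a placeholder.
client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT order_date, SUM(amount) AS daily_revenue
    FROM `my-analytics-project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY order_date
    ORDER BY order_date
"""

# client.query() submits the job; .result() waits and yields rows.
for row in client.query(query).result():
    print(row.order_date, row.daily_revenue)
```

In a production pipeline a query like this would typically be scheduled via Cloud Composer, which the posting also lists.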

Posted 2 months ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Pune

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security so that data scientists and analysts can easily access data whenever they need to.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Experience developing Python and PySpark programs for data analysis.
- Good working experience using Python to develop custom frameworks for rule generation (similar to a rules engine).
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark.
- Experience applying business transformations with Apache Spark DataFrames/RDDs and using Hive context objects to perform read/write operations.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
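As an illustration of the Hive/PySpark work this posting describes, the following is a minimal PySpark sketch (not the employer's actual code) that reads a Hive table, applies a business transformation, and writes the result back; table and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support lets spark.table() resolve warehouse tables.
spark = (SparkSession.builder
         .appName("orders-enrichment")
         .enableHiveSupport()
         .getOrCreate())

orders = spark.table("warehouse.orders")  # hypothetical Hive table

# Example business transformation on a DataFrame.
enriched = (orders
            .filter(F.col("status") == "COMPLETED")
            .withColumn("net_amount", F.col("amount") - F.col("discount")))

enriched.write.mode("overwrite").saveAsTable("warehouse.orders_enriched")
```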

Posted 2 months ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions with Azure Synapse or Azure SQL Data Warehouse.
- Spark on Azure, as available in HDInsight and Databricks.
- Good customer communication skills.
- Good analytical skills.

Posted 2 months ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Azure Data Factory:
- Develop Azure Data Factory objects: ADF pipelines, configuration, parameters, variables, integration runtime.
- Hands-on knowledge of ADF activities (such as Copy, Stored Procedure, and Lookup) and Data Flows.
- ADF data ingestion and integration with other services.

Azure Databricks:
- Experience with big data components such as Kafka, Spark SQL, DataFrames, and Hive implemented using Azure Databricks is preferred.
- Azure Databricks integration with other services.
- Read and write data in Azure Databricks.
- Best practices in Azure Databricks.

Synapse Analytics:
- Import data into Azure Synapse Analytics with and without using PolyBase.
- Implement a data warehouse with Azure Synapse Analytics.
- Query data in Azure Synapse Analytics.
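To tie the three services above together, here is a hedged sketch of loading a Spark DataFrame from Databricks into a dedicated Synapse SQL pool via the Azure Synapse connector, which stages data through ADLS and uses PolyBase/COPY under the hood. All URLs, credentials, and table names are placeholders, and `df` is assumed to be an existing DataFrame.

```python
# Sketch only: assumes a Databricks cluster with the Azure Synapse connector
# available and storage credentials configured in the Spark session.
jdbc_url = (
    "jdbc:sqlserver://example-synapse.sql.azuresynapse.net:1433;"
    "database=dw;encrypt=true;loginTimeout=30"
)

(df.write
   .format("com.databricks.spark.sqldw")          # Azure Synapse connector
   .option("url", jdbc_url)
   .option("tempDir", "abfss://tmp@examplestorage.dfs.core.windows.net/synapse")
   .option("forwardSparkAzureStorageCredentials", "true")
   .option("dbTable", "dbo.staging_orders")       # hypothetical target table
   .mode("append")
   .save())
```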

Posted 2 months ago

Apply

1.0 - 5.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer
Experience: 5-8 years
Location: Delhi, Pune, Bangalore (Hyderabad & Chennai also acceptable)
Time Zone: Aligned with UK time zone
Notice Period: Immediate joiners only

Role Overview: We are seeking experienced Data Engineers to design, develop, and optimize large-scale data processing systems. You will play a key role in building scalable, efficient, and reliable data pipelines in a cloud-native environment, leveraging your expertise in GCP, BigQuery, Dataflow, Dataproc, and more.

Key Responsibilities:
- Design, build, and manage scalable and reliable data pipelines for real-time and batch processing.
- Implement robust data processing solutions using GCP services and open-source technologies.
- Create efficient data models and write high-performance analytics queries.
- Optimize pipelines for performance, scalability, and cost-efficiency.
- Collaborate with data scientists, analysts, and engineering teams to ensure smooth data integration and transformation.
- Maintain high data quality, enforce validation rules, and set up monitoring and alerting.
- Participate in code reviews, deployment activities, and production support.

Technical Skills Required:
- Cloud Platforms: GCP (Google Cloud Platform), mandatory
- Key GCP Services: Dataproc, BigQuery, Dataflow
- Programming Languages: Python, Java, PySpark
- Data Engineering Concepts: Data ingestion, Change Data Capture (CDC), ETL/ELT pipeline design
- Strong understanding of distributed computing, data structures, and performance tuning

Required Qualifications & Attributes:
- 5-8 years of hands-on experience in data engineering roles
- Proficiency in building and optimizing distributed data pipelines
- Solid grasp of data governance and security best practices in cloud environments
- Strong analytical and problem-solving skills
- Effective verbal and written communication skills
- Proven ability to work independently and in cross-functional teams
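For a flavor of the Dataflow work listed above, here is a minimal Apache Beam (Python SDK) batch pipeline that could run on the Dataflow runner, reading JSON lines from Cloud Storage and appending them to a BigQuery table; bucket, project, and table names are hypothetical.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# All resource names below are placeholders.
opts = PipelineOptions(
    runner="DataflowRunner",
    project="my-gcp-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=opts) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://my-bucket/raw/events.jsonl")
     | "Parse" >> beam.Map(json.loads)
     | "KeepValid" >> beam.Filter(lambda e: "user_id" in e)
     | "Write" >> beam.io.WriteToBigQuery(
           "my-gcp-project:analytics.events",   # assumes the table exists
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```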

Posted 2 months ago

Apply

5.0 - 7.0 years

7 - 15 Lacs

Hyderabad, Mumbai (All Areas)

Work from Office

Job Description: We are seeking a skilled and innovative AI/ML Engineer to design, build, and deploy machine learning solutions that solve complex business challenges in mission-critical industries. You will work with multidisciplinary teams to apply AI, ML, and deep learning techniques to large-scale datasets and real-time operational systems.

Key Responsibilities:
- Design and implement end-to-end ML pipelines: data ingestion, model training, evaluation, and deployment
- Build predictive and prescriptive models using structured, unstructured, and real-time data
- Develop and fine-tune deep learning models for NLP, computer vision, or time-series forecasting
- Integrate ML models into enterprise platforms, APIs, and dashboards
- Work closely with domain experts, data engineers, and DevOps teams to ensure production-grade performance
- Conduct model validation, bias testing, and post-deployment monitoring
- Document workflows, architecture, and results for reproducibility and audits
- Research and evaluate new AI tools, techniques, and trends

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field
- 5-7 years of hands-on experience developing and deploying ML models
- Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, XGBoost)
- Experience with NLP (BERT, spaCy) or CV frameworks (OpenCV, YOLO, MMDetection)
- Familiarity with data processing tools (Pandas, Dask, Apache Spark)
- Proficiency in SQL and experience with time-series or sensor data
- Knowledge of MLOps practices and tools (Docker, MLflow, Airflow, Kubernetes)
- Industry experience in Oil & Gas, Power Systems, or Urban Analytics
- Hands-on experience with edge AI (Jetson, Coral), GPU compute, and model optimization
- Cloud services: AWS SageMaker, Azure ML, or Google AI Platform
- Familiarity with REST APIs, OPC-UA integration, or SCADA data sources
- Knowledge of responsible AI practices (explainability, fairness, privacy)
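For a sense of the "end-to-end ML pipeline" responsibility, here is a toy, self-contained scikit-learn sketch covering train and evaluate on synthetic data; a production version would add real data ingestion, experiment tracking (e.g., MLflow), and a deployment step.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for ingested data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Preprocessing + model bundled as one deployable artifact.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Evaluation stage of the pipeline.
print(classification_report(y_test, model.predict(X_test)))
```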

Posted 2 months ago

Apply

7.0 - 10.0 years

2 - 6 Lacs

Pune

Work from Office

Responsibilities:
- Design, develop, and deploy data pipelines using Databricks, including data ingestion, transformation, and loading (ETL) processes.
- Develop and maintain high-quality, scalable, and maintainable Databricks notebooks using Python.
- Work with Delta Lake and other advanced features.
- Leverage Unity Catalog for data governance, access control, and data discovery.
- Develop and optimize data pipelines for performance and cost-effectiveness.
- Integrate with various data sources, including but not limited to databases, cloud storage (Azure Blob Storage, ADLS, Synapse), and APIs.
- Experience working with Parquet files for data storage and processing.
- Experience with data integration from Azure Data Factory, Azure Data Lake, and other relevant Azure services.
- Perform data quality checks and validation to ensure data accuracy and integrity.
- Troubleshoot and resolve data pipeline issues effectively.
- Collaborate with data analysts, business analysts, and business stakeholders to understand their data needs and translate them into technical solutions.
- Participate in code reviews and contribute to best practices within the team.
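The data-quality responsibility above often reduces to simple rule checks that gate a pipeline stage. Below is a minimal sketch for a Databricks notebook (`spark` is predefined there); the Delta path, column names, and rules are hypothetical.

```python
from pyspark.sql import functions as F

# Load the curated Delta table (path is a placeholder).
df = spark.read.format("delta").load("/mnt/curated/orders")

# Hypothetical validation rules: count the violating rows per rule.
checks = {
    "null_order_id": df.filter(F.col("order_id").isNull()).count(),
    "negative_amount": df.filter(F.col("amount") < 0).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # Fail the job so the orchestrator (e.g., ADF) can alert and retry.
    raise ValueError(f"Data-quality checks failed: {failed}")
```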

Posted 2 months ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are hiring a Data Platform Engineer to build scalable infrastructure for data ingestion, processing, and analysis.

Key Responsibilities:
- Architect distributed data systems.
- Enable data discoverability and quality.
- Develop data tooling and platform APIs.

Required Skills & Qualifications:
- Experience with Spark, Kafka, and Delta Lake.
- Proficiency in Python, Scala, or Java.
- Familiarity with cloud-based data platforms.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies
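Since the posting centers on Spark, Kafka, and Delta Lake, here is a hedged sketch of a Structured Streaming job that ingests a Kafka topic into a Delta table. Broker, topic, and paths are placeholders, and the cluster is assumed to have the Kafka and Delta connectors available.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Read a Kafka topic as an unbounded stream (names are placeholders).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .selectExpr("CAST(value AS STRING) AS payload"))

# Append to a Delta table; the checkpoint makes the stream restartable.
query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/chk/events")
         .start("/delta/events"))

query.awaitTermination()
```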

Posted 2 months ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Mumbai

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security so that data scientists and analysts can easily access data whenever they need to.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Experience developing Python and PySpark programs for data analysis.
- Good working experience using Python to develop custom frameworks for rule generation (similar to a rules engine).
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark.
- Experience applying business transformations with Apache Spark DataFrames/RDDs and using Hive context objects to perform read/write operations.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.

Posted 2 months ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Kochi

Work from Office

Skill: Databricks
Experience: 5 to 14 years
Location: Kochi (walk-in on 14th June)

- Design, develop, and maintain scalable and efficient data pipelines using the Azure Databricks platform.
- Work experience with Databricks Unity Catalog.
- Collaborate with data scientists and analysts to integrate machine learning models into production pipelines.
- Implement data quality checks and ensure data integrity throughout the data ingestion and transformation processes.
- Optimize cluster performance and scalability to handle large volumes of data processing.
- Troubleshoot and resolve issues related to data pipelines, clusters, and data processing jobs.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Conduct performance tuning and optimization for Spark jobs on Azure Databricks.
- Provide technical guidance and mentorship to junior data engineers.

Posted 2 months ago

Apply

5.0 - 8.0 years

9 - 13 Lacs

Pune

Hybrid

So, what's the role all about?

We are looking for a highly driven and technically skilled Software Engineer to lead the integration of various Content Management Systems with AWS Knowledge Hub, enabling advanced Retrieval-Augmented Generation (RAG) search across heterogeneous customer data without requiring data duplication. This role will also be responsible for expanding the scope of Knowledge Hub to support non-traditional knowledge items and enhance customer self-service capabilities. You will work at the intersection of AI, search infrastructure, and developer experience to make enterprise knowledge instantly accessible, actionable, and AI-ready.

How will you make an impact?
- Integrate CMS with AWS Knowledge Hub to allow seamless RAG-based search across diverse data types, eliminating the need to copy data into Knowledge Hub instances.
- Extend Knowledge Hub capabilities to ingest and index non-knowledge assets, including structured data, documents, tickets, logs, and other enterprise sources.
- Build secure, scalable connectors to read directly from customer-maintained indices and data repositories.
- Enable self-service capabilities for customers to manage content sources using AppFlow and Tray.ai, configure ingestion rules, and set up search parameters independently.
- Collaborate with the NLP/AI team to optimize relevance and performance of RAG search pipelines.
- Work closely with product and UX teams to design intuitive, powerful experiences for self-service data onboarding and search configuration.
- Implement data governance, access control, and observability features to ensure enterprise readiness.

Have you got what it takes?
- Proven experience with search infrastructure, RAG pipelines, and LLM-based applications.
- 5+ years' hands-on experience with AWS Knowledge Hub, AppFlow, Tray.ai, or equivalent cloud-based indexing/search platforms.
- Strong backend development skills (Python, TypeScript/Node.js, .NET/Java) and familiarity with building and consuming REST APIs.
- Infrastructure as Code (IaC) knowledge, e.g., AWS CloudFormation and CDK.
- Deep understanding of data ingestion pipelines, index management, and search query optimization.
- Experience working with unstructured and semi-structured data in real-world enterprise settings.
- Ability to design for scale, security, and multi-tenant environments.

What's in it for you?

Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager, Engineering, CX
Role Type: Individual Contributor
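The core RAG loop this role revolves around is small enough to sketch. The snippet below is illustrative only: `search_index` and `llm_complete` are stand-ins for whatever retrieval backend (e.g., a Knowledge Hub index queried in place) and LLM API the actual stack uses.

```python
def answer(question: str, search_index, llm_complete, k: int = 5) -> str:
    """Minimal retrieval-augmented generation loop (illustrative only)."""
    # 1. Retrieve: top-k passages from the customer-maintained index,
    #    queried in place rather than copied into a separate store.
    passages = search_index.query(question, top_k=k)

    # 2. Augment: ground the prompt in the retrieved context.
    context = "\n\n".join(p.text for p in passages)
    prompt = (f"Answer using only this context:\n{context}\n\n"
              f"Q: {question}\nA:")

    # 3. Generate.
    return llm_complete(prompt)
```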

Posted 2 months ago

Apply

3.0 - 8.0 years

6 - 12 Lacs

Kolkata

Work from Office

Job Title: AI/ML Data Engineer
Location: Kolkata, India
Experience: 3+ years
Industry: IT / AI & Data Analytics

Job Summary: We are hiring an experienced AI/ML Data Engineer to design and build scalable data pipelines and ETL processes to support analytics and machine learning projects. The ideal candidate will have strong Python and SQL skills, hands-on experience with tools like Apache Airflow and Kafka, and working knowledge of cloud platforms (AWS, GCP, or Azure). A strong understanding of data transformation, feature engineering, and data automation is essential.

Key Skills Required:
- ETL & data pipeline development
- Python & SQL programming
- Apache Airflow / Kafka / Spark / Hadoop
- Cloud platforms: AWS / GCP / Azure
- Data cleaning & feature engineering
- Strong problem-solving & business understanding

Preferred Profile: Candidates with a B.Tech / M.Tech / MCA in Computer Science or Data Engineering and 3+ years of hands-on experience building data solutions, who can work closely with cross-functional teams and support AI/ML initiatives.
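Given the Airflow requirement, here is a minimal sketch of an extract-transform-load DAG (Airflow 2.x syntax assumed); the DAG name and task bodies are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder task implementations.
def extract():
    pass

def transform():
    pass

def load():
    pass

with DAG(
    dag_id="daily_etl",              # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```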

Posted 2 months ago

Apply

3.0 - 4.0 years

3 - 6 Lacs

Gujarat

Hybrid

Job Type: Contract
Duration: 6 months
Work Type: Remote

Job Description:

Key Responsibilities:
- Design, develop, and maintain serverless applications using Python and serverless frameworks.
- Implement and optimize serverless functions to ensure scalability and efficiency.
- Work with event-driven programming patterns to create responsive and real-time solutions.
- Collaborate with cross-functional teams to understand project requirements and deliver robust solutions.
- Utilize AWS services, primarily S3 and DynamoDB, for data storage, retrieval, and processing.
- Write and maintain comprehensive unit and integration tests using pytest to ensure code quality and reliability.
- Troubleshoot and debug issues to optimize performance and functionality.
- Stay updated on emerging trends and best practices in serverless computing and Python development.

Required Skills & Qualifications:
- Minimum 3 years of hands-on experience in Python programming.
- In-depth knowledge of serverless frameworks and developing serverless functions.
- Strong experience with event-driven programming and designing event-based workflows.
- Proficient in writing unit and integration tests using pytest.
- Hands-on experience with AWS services, especially S3 and DynamoDB.
- Strong problem-solving skills and ability to work independently or in a team environment.

Preferred Skills (Nice to Have):
- Familiarity with CI/CD pipelines and DevOps processes.
- Knowledge of other AWS services beyond S3 and DynamoDB.
- Experience with other serverless computing tools and frameworks.

Years of Experience: 3+ years of relevant work experience with a reputed organization.
Educational Qualification: ME (IT, Computer), BE (IT, Computer), MCA, MSC-IT, BCA
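As a sketch of the event-driven serverless pattern this role describes, here is a minimal AWS Lambda handler reacting to an S3 event and writing object metadata to DynamoDB. The table name is hypothetical, and the structure lends itself to pytest unit testing by calling `handler` with a fabricated event.

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ingested-files")  # hypothetical table


def handler(event, context):
    """Triggered by S3 ObjectCreated events; records each object in DynamoDB."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        table.put_item(Item={
            "pk": f"{bucket}/{key}",
            "size": record["s3"]["object"].get("size", 0),
        })
    return {"statusCode": 200,
            "body": json.dumps({"processed": len(records)})}
```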

Posted 2 months ago

Apply

6.0 - 10.0 years

22 - 27 Lacs

Chennai, Bengaluru

Hybrid

We are seeking a skilled Splunk Engineer to join our team. The ideal candidate will have strong expertise in Splunk development technologies and practices, as well as experience in system monitoring, incident management, and mentoring. This role requires a deep understanding of Splunk infrastructure components and a solid background in software engineering and security practices.

Key Responsibilities:
- Develop and maintain Splunk services and platforms to ensure availability and health.
- Participate in end-to-end system design and delivery.
- Manage incidents, problems, and defects, applying fixes and resolving systemic issues.
- Mentor and guide other engineers within the team.
- Onboard applications into Splunk, involving log ingestion, database queries, and transaction stitching.
- Create and manage Splunk dashboards and alerts.
- Utilize ITSI and Splunk data ingestion patterns such as DBX, JMS-MQ, UF, files, HEC, etc.
- Administer Splunk infrastructure components such as indexers, universal forwarders, heavy forwarders, search head clusters, the cluster master, deployment servers, etc.
- Provide support for Splunk platforms, including problem and incident management.
- Use MongoDB and Elasticsearch for data management.
- Utilize programming skills in CSS, JavaScript, Java, Python scripting, and regex.
- Implement CI/CD tools such as Git, Bitbucket, Bamboo, Artifactory, and Ansible.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 6-10 years of experience in Splunk engineering and related technologies.
- Proficiency in Splunk infrastructure, data ingestion, and dashboard creation.
- Strong problem-solving and analytical skills.
- Excellent communication and mentoring abilities.
- Exposure to New Relic is an added advantage.
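Among the ingestion patterns listed (DBX, UF, HEC, etc.), the HTTP Event Collector is easy to illustrate. Below is a minimal sketch of sending one JSON event to a Splunk HEC endpoint; the host, token, and index are placeholders.

```python
import json

import requests

# Placeholder endpoint and token.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

payload = {
    "event": {"app": "checkout", "level": "ERROR", "msg": "payment timeout"},
    "sourcetype": "_json",
    "index": "app_logs",  # hypothetical index
}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    data=json.dumps(payload),
    timeout=10,
)
resp.raise_for_status()
```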

Posted 2 months ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Pune

Work from Office

Location: Magarpatta, Pune
Job Type: Full-time / Contract
Experience Level: 8+ years in the Salesforce ecosystem with 2+ years in Data Cloud (formerly Customer Data Platform)
Industry: IT Services / Consulting
Reports To: Head of Salesforce Practice / CTO

Job Summary: We are seeking a highly skilled Salesforce Data Cloud Architect to lead the design, implementation, and optimization of Salesforce Data Cloud solutions. The ideal candidate will have a deep understanding of customer data platforms (CDPs), data modeling, ingestion pipelines, identity resolution, and real-time personalization strategies. You'll work closely with stakeholders to unify customer data across touchpoints and drive actionable insights.

Key Responsibilities:
- Design scalable, high-performance Salesforce Data Cloud architecture aligned with business goals.
- Lead the implementation of data ingestion, transformation, identity resolution, and activation workflows.
- Collaborate with business and technical teams to gather requirements and translate them into Data Cloud solutions.
- Define and maintain data models, unification rules, calculated insights, and segmentation strategies.
- Integrate external data sources (e.g., AWS S3, CRM, POS, web, mobile) into Salesforce Data Cloud using connectors and custom integrations.
- Ensure governance, security, and compliance of customer data within the platform.
- Create documentation, playbooks, and reusable assets for future projects.
- Mentor developers and admins on Data Cloud best practices and features.
- Work closely with Salesforce AE/CSM and support teams to resolve complex issues.

Required Qualifications:
- 8+ years in the Salesforce ecosystem, including Sales, Service, or Marketing Cloud.
- 2+ years of hands-on experience with Salesforce Data Cloud / Customer Data Platform (CDP).
- Strong understanding of data architecture, ETL, customer identity resolution, and segmentation strategies.
- Experience with Salesforce tools like Data Streams, Data Lake Objects, Calculated Insights, and Activation Targets.
- Familiarity with data integration tools such as MuleSoft, Informatica, or AWS Glue.
- Salesforce certifications preferred: Data Cloud Consultant, Integration Architect, or Application Architect.
- Excellent communication, stakeholder management, and documentation skills.

Preferred Qualifications:
- Experience with AI/ML in customer analytics or real-time personalization.
- Familiarity with marketing platforms (e.g., Salesforce Marketing Cloud, Adobe, Braze).
- Experience in B2B/B2C industries like retail, telecom, financial services, or healthcare.

What We Offer:
- Opportunity to work on cutting-edge Salesforce Data Cloud projects.
- Collaborative work environment with a team of top-tier Salesforce professionals.
- Access to learning resources and certification support.
- Competitive compensation and performance incentives.

Posted 2 months ago

Apply

2.0 - 6.0 years

13 - 17 Lacs

Mumbai

Work from Office

At Siemens Energy, we can. Our technology is key, but our people make the difference. Brilliant minds innovate. They connect, create, and keep us on track towards changing the world's energy systems. Their spirit fuels our mission. Our culture is defined by caring, agile, respectful, and accountable individuals. We value excellence of any kind. Sounds like you?

Software Developer - Data Integration Platform - Mumbai or Pune, Siemens Energy, Full Time

Looking for a challenging role? If you really want to make a difference, make it with us. We make real what matters.

About the role

Technical Skills (Mandatory):
- Python (data ingestion pipelines): Proficiency in building and maintaining data ingestion pipelines using Python.
- Blazegraph: Experience with Blazegraph technology.
- Neptune: Familiarity with Amazon Neptune, a fully managed graph database service.
- Knowledge graphs (RDF, triples): Understanding of RDF (Resource Description Framework) and triple stores for knowledge graph management.
- AWS environment (S3): Experience working with AWS services, particularly S3 for storage solutions.
- Git: Proficiency in using Git for version control.

Optional and good-to-have skills:
- Azure DevOps: Experience with Azure DevOps for CI/CD pipelines and project management (optional but preferred).
- Metaphactory by Metaphacts: Familiarity with Metaphactory, a platform for knowledge graph management (very optional).
- LLM / machine learning experience: Experience with Large Language Models (LLMs) and machine learning techniques.
- Big data solutions: Experience with big data solutions is a plus.
- SnapLogic / Alteryx / ETL know-how: Familiarity with ETL tools like SnapLogic or Alteryx is optional but beneficial.

We don't need superheroes, just super minds.
- A degree in Computer Science, Engineering, or a related field is preferred.
- Demonstrated experience in professional software development practices.
- 3-5 years of relevant experience in software development and related technologies.

Soft Skills:
- Strong problem-solving skills.
- Excellent communication and teamwork abilities.
- Ability to work in a fast-paced and dynamic environment.
- Strong attention to detail and commitment to quality.
- Fluent in English (spoken and written).

We've got quite a lot to offer. How about you? This role is based in Pune or Mumbai, where you'll get the chance to work with teams impacting entire cities, countries, and the shape of things to come.

We're Siemens. A collection of over 379,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and imagination and help us shape tomorrow. Find out more about Siemens careers at: www.siemens.com/careers
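To illustrate the RDF/triple-store side of the stack, here is a small `rdflib` sketch that builds a few triples and serializes them as Turtle, the kind of payload one might bulk-load into Blazegraph or Amazon Neptune; the namespace and entities are made up.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace for plant assets.
EX = Namespace("http://example.org/asset/")

g = Graph()
g.bind("ex", EX)

# A couple of illustrative triples.
g.add((EX["turbine-42"], RDF.type, EX.Turbine))
g.add((EX["turbine-42"], EX.locatedIn, Literal("Plant A")))

# Serialize to Turtle, ready for a triple store's bulk loader.
print(g.serialize(format="turtle"))
```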

Posted 2 months ago

Apply

7.0 - 10.0 years

1 - 5 Lacs

Pune

Work from Office

Responsibilities:
- Design, develop, and deploy data pipelines using Databricks, including data ingestion, transformation, and loading (ETL) processes.
- Develop and maintain high-quality, scalable, and maintainable Databricks notebooks using Python.
- Work with Delta Lake and other advanced features.
- Leverage Unity Catalog for data governance, access control, and data discovery.
- Develop and optimize data pipelines for performance and cost-effectiveness.
- Integrate with various data sources, including but not limited to databases, cloud storage (Azure Blob Storage, ADLS, Synapse), and APIs.
- Experience working with Parquet files for data storage and processing.
- Experience with data integration from Azure Data Factory, Azure Data Lake, and other relevant Azure services.
- Perform data quality checks and validation to ensure data accuracy and integrity.
- Troubleshoot and resolve data pipeline issues effectively.
- Collaborate with data analysts, business analysts, and business stakeholders to understand their data needs and translate them into technical solutions.
- Participate in code reviews and contribute to best practices within the team.

Posted 2 months ago

Apply

13.0 - 23.0 years

25 - 35 Lacs

Hyderabad

Work from Office

Role: Snowflake Practice Lead / Architect / Solution Architect
Experience: 13+ years
Work Location: Hyderabad

Position Overview: We are seeking a highly skilled and experienced Snowflake Practice Lead to drive our data strategy, architecture, and implementation using Snowflake. This leadership role requires a deep understanding of Snowflake's cloud data platform, data engineering best practices, and enterprise data management. The ideal candidate will be responsible for defining best practices, leading a team of Snowflake professionals, and driving successful Snowflake implementations for clients.

Key Responsibilities:

Leadership & Strategy:
- Define and drive the Snowflake practice strategy, roadmap, and best practices.
- Act as the primary subject matter expert (SME) for Snowflake architecture, implementation, and optimization.
- Collaborate with stakeholders to understand business needs and align data strategies accordingly.

Technical Expertise & Solutioning:
- Design and implement scalable, high-performance data architectures using Snowflake.
- Develop best practices for data ingestion, transformation, modeling, and security within Snowflake.
- Guide clients on Snowflake migrations, ensuring a seamless transition from legacy systems.
- Optimize query performance, storage utilization, and cost efficiency in Snowflake environments.

Team Leadership & Mentorship:
- Lead and mentor a team of Snowflake developers, data engineers, and architects.
- Provide technical guidance, conduct code reviews, and establish best practices for Snowflake development.
- Train internal teams and clients on Snowflake capabilities, features, and emerging trends.

Client & Project Management:
- Engage with clients to understand business needs and design tailored Snowflake solutions.
- Lead end-to-end Snowflake implementation projects, ensuring quality and timely delivery.
- Work closely with data scientists, analysts, and business stakeholders to maximize data utilization.

Required Skills & Experience:
- 10+ years of experience in data engineering, data architecture, or cloud data platforms.
- 5+ years of hands-on experience with Snowflake in large-scale enterprise environments.
- Strong expertise in SQL, performance tuning, and cloud-based data solutions.
- Experience with ETL/ELT processes, data pipelines, and data integration tools (e.g., Talend, Matillion, dbt, Informatica).
- Proficiency in cloud platforms such as AWS, Azure, or GCP, particularly their integration with Snowflake.
- Knowledge of data security, governance, and compliance best practices.
- Strong leadership, communication, and client-facing skills.
- Experience migrating from traditional data warehouses (Oracle, Teradata, SQL Server) to Snowflake.
- Familiarity with Python, Spark, or other big data technologies is a plus.

Preferred Qualifications:
- Snowflake SnowPro certification (e.g., SnowPro Core, Advanced Architect, Data Engineer).
- Experience building data lakes, data marts, and real-time analytics solutions.
- Hands-on experience with DevOps, CI/CD pipelines, and Infrastructure as Code (IaC) in Snowflake environments.

Why Join Us?
- Opportunity to lead cutting-edge Snowflake implementations in a dynamic, fast-growing environment.
- Work with top-tier clients across industries, solving complex data challenges.
- Continuous learning and growth opportunities in cloud data technologies.
- Competitive compensation, benefits, and a collaborative work culture.
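For a concrete touchpoint with the platform, here is a minimal sketch using the official Snowflake Python connector (`snowflake-connector-python`); account, credentials, and object names are placeholders.

```python
import snowflake.connector

# All connection values below are placeholders.
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="ETL_USER",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)

cur = conn.cursor()
try:
    cur.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
    for region, total in cur:
        print(region, total)
finally:
    cur.close()
    conn.close()
```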

Posted 2 months ago

Apply

15 - 24 years

20 - 35 Lacs

Kochi, Chennai, Thiruvananthapuram

Work from Office

Roles and Responsibilities:

Architecture & Infrastructure Design
- Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch.
- Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines.
- Optimize cost and performance of cloud resources used for AI workloads.

AI Project Leadership
- Translate business objectives into actionable AI strategies and solutions.
- Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring.
- Drive roadmap planning, delivery timelines, and project success metrics.

Model Development & Deployment
- Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases.
- Implement frameworks for bias detection, explainability, and responsible AI.
- Enhance model performance through tuning and efficient resource utilization.

Security & Compliance
- Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks.
- Perform regular audits and vulnerability assessments to ensure system integrity.

Team Leadership & Collaboration
- Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts.
- Promote cross-functional collaboration with business and technical stakeholders.
- Conduct technical reviews and ensure delivery of production-grade solutions.

Monitoring & Maintenance
- Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability.
- Ensure ongoing optimization of infrastructure and ML pipelines.

Must-Have Skills:
- 10+ years of experience in IT with 4+ years in AI/ML leadership roles.
- Strong hands-on experience with AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch.
- Expertise in Python for ML development and automation.
- Solid understanding of Terraform, Docker, Git, and CI/CD pipelines.
- Proven track record of delivering AI/ML projects into production environments.
- Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines.
- Experience implementing Responsible AI practices, including fairness, explainability, and bias mitigation.
- Knowledge of cloud security best practices and IAM role configuration.
- Excellent leadership, communication, and stakeholder management skills.

Good-to-Have Skills:
- AWS certifications such as AWS Certified Machine Learning - Specialty or AWS Certified Solutions Architect.
- Familiarity with data privacy laws and frameworks (GDPR, HIPAA).
- Experience with AI governance and ethical AI frameworks.
- Expertise in cost optimization and performance tuning for AI on the cloud.
- Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services.
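As a small concrete slice of the SageMaker work above, here is a hedged sketch of invoking a deployed real-time endpoint with `boto3`; the endpoint name and payload schema are hypothetical.

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

# Hypothetical feature vector for a hypothetical endpoint.
payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}

resp = runtime.invoke_endpoint(
    EndpointName="churn-model-prod",   # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(resp["Body"].read()))
```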

Posted 2 months ago

Apply

5 - 8 years

6 - 10 Lacs

Bengaluru

Work from Office

About The Role

Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Big Data Developer - Spark, Scala, PySpark (coding & scripting)
Years of Experience: 5 to 12 years
Location: Bangalore
Notice Period: 0 to 30 days

Key Skills:
- Proficient in Spark, Scala, and PySpark coding and scripting
- Fluent in big data engineering development using the Hadoop/Spark ecosystem
- Hands-on experience in Big Data
- Good knowledge of the Hadoop ecosystem
- Knowledge of AWS cloud architecture
- Data ingestion and integration into the Data Lake using Hadoop ecosystem tools such as Sqoop, Spark, Impala, Hive, Oozie, and Airflow
- Fluency in Python and/or Scala
- Strong communication skills

2. Perform coding and ensure optimal software/module development
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software.
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them.
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces.
- Analyze information to recommend and plan the installation of new systems or modifications of existing systems.
- Ensure that code is error-free, with no bugs or test failures.
- Prepare reports on programming project specifications, activities, and status.
- Ensure all issues are raised per the norms defined for the project/program/account, with clear descriptions and replication patterns.
- Compile timely, comprehensive, and accurate documentation and reports as requested.
- Coordinate with the team on daily project status and progress and document it.
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to the concerned stakeholders.

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution
- Capture all requirements and clarifications from the client for better-quality work.
- Take feedback on a regular basis to ensure smooth and on-time delivery.
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members.
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements.
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
- Document necessary details and reports formally for proper understanding of the software, from client proposal to implementation.
- Ensure good quality of customer interaction with respect to e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no internal or external complaints.

Deliverables (Performance Parameter: Measure):
1. Continuous integration, deployment & monitoring of software: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan.
2. Quality & CSAT: on-time delivery, software management, troubleshooting of queries, customer experience, completion of assigned certifications for skill upgradation.
3. MIS & reporting: 100% on-time MIS & report generation.

Mandatory Skills: Python for Insights.
Experience: 5-8 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 months ago

Apply

3 - 5 years

6 - 10 Lacs

Bengaluru

Work from Office

About The Role

Qualifications:
- 6+ years of experience with enterprise system logging and monitoring tools, with a desired 5+ years in relevant critical infrastructure of enterprise Splunk and Elasticsearch.
- 5+ years of working experience as a Splunk administrator: cluster building, data ingestion management, user role management, and search configuration and optimization.
- Strong knowledge of open-source logging and monitoring tools.
- Experience with container logging and monitoring solutions.
- Experience with Linux operating system management and administration.
- Familiarity with LAN/WAN technologies and a clear understanding of basic network concepts/services.
- Strong understanding of multi-tier application architectures and application runtime environments.
- Monitoring the health and performance of the Splunk environment and troubleshooting any issues that arise.
- Experience working in a 24/7 on-call environment.
- Knowledge of Python and other scripting languages, and of infrastructure automation technologies such as Ansible, is desired.
- Splunk Admin certification is a plus.
- DevOps + AWS, Grafana, Prometheus.
- Should be willing to work in shifts and during weekends.

The selected candidate will be responsible for:
- Splunk administration support, including operation and maintenance of the log aggregation and Security Information and Event Management (SIEM) platform.
- Performing systems analysis; modifying and updating systems and related data ingestion parameters based on the results of analysis; deploying applications and tools; testing deployed applications and tools; and communicating updates to the customer.
- Establishing and maintaining configuration and technical support, assisting in the technical design process, and providing guidance/direction to the customer on how to best get value from Splunk products.
- Maintaining, upgrading, and troubleshooting Splunk servers, clusters, and management systems.
- Installing, upgrading, and maintaining required Splunk applications and add-ons.
- Providing performance and license tuning for systems and troubleshooting Splunk components across multiple network environments.
- Providing solution engineering support to ensure systems and components meet current and future standards.
- Developing, creating, deploying, and managing custom Splunk monitors, alerts, and dashboards.
- Monitoring Splunk for cluster status, health status, and other issues, and resolving them as needed.
- Managing patching and updates of Splunk hosts and/or Splunk application software.
- Monitoring and auditing configurations and participating in the Change Management process to ensure that unauthorized changes do not occur.
- Building and integrating contextual data into ...

Experience: 6-10 years
Location: PAN India
CBR: 150K

Mandatory Skills: Splunk AIOps.
Experience: 3-5 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 months ago

Apply

3 - 5 years

9 - 14 Lacs

Bengaluru

Work from Office

About PhonePe Group: PhonePe is India's leading digital payments company with 50 crore (500 million) registered users and 3.7 crore (37 million) merchants, covering over 99% of the postal codes across India. On the back of its leadership in digital payments, PhonePe has expanded into financial services (Insurance, Mutual Funds, Stock Broking, and Lending) as well as adjacent tech-enabled businesses such as Pincode for hyperlocal shopping and Indus App Store, India's first localized app store. The PhonePe Group is a portfolio of businesses aligned with the company's vision to offer every Indian an equal opportunity to accelerate their progress by unlocking the flow of money and access to services.

Culture: At PhonePe, we take extra care to make sure you give your best at work, every day! And creating the right environment for you is just one of the things we do. We empower people and trust them to do the right thing. Here, you own your work from start to finish, right from day one. Being enthusiastic about tech is a big part of being at PhonePe. If you like building technology that impacts millions, ideating with some of the best minds in the country, and executing your dreams with purpose and speed, join us!

About The Role: We are seeking a motivated and skilled Data Scientist with 3 years of experience to join our dynamic team. The ideal candidate will have a strong foundation in machine learning, with a focus on implementing algorithms at scale. Knowledge of computer vision and natural language processing is a plus.

Key Responsibilities:
- Develop and implement machine learning models: offline batch models as well as real-time online and edge-compute models
- Analyze complex datasets and extract meaningful insights to drive business decisions
- Collaborate with cross-functional teams to identify and solve business problems using data-driven approaches
- Communicate findings and recommendations to stakeholders effectively

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field
- 3+ years of experience in a Data Scientist role
- Strong proficiency in Python and SQL
- Solid understanding of machine learning algorithms and statistical modeling techniques
- Knowledge of Natural Language Processing (NLP) and Computer Vision (CV) concepts and algorithms
- Hands-on experience implementing and deploying machine learning algorithms
- Experience with data visualization tools and techniques
- Strong analytical and problem-solving skills
- Excellent communication skills, both written and verbal

Preferred Qualifications:
- Experience with PySpark and other big data processing frameworks
- Knowledge of deep learning frameworks (e.g., TensorFlow, PyTorch)

Technical Skills:
- Programming languages: Python (required), SQL (required), Java (basic knowledge preferred)
- Machine learning: strong foundation in traditional ML algorithms, with working knowledge of NLP and computer vision
- Big data: deep knowledge of PySpark
- Data storage and retrieval: familiarity with databases/MLflow preferred
- Mathematics: strong background in statistics, linear algebra, and probability theory
- Version control: Git

Soft Skills:
- Excellent communication skills to facilitate interactions with stakeholders
- Ability to explain complex technical concepts to non-technical audiences
- Strong problem-solving and analytical thinking
- Self-motivated and able to work independently as well as in a team environment
- Curiosity and eagerness to learn new technologies and methodologies

We're looking for a motivated individual who is passionate about data science and eager to take on challenging tasks. If you thrive in a fast-paced environment and are excited about leveraging cutting-edge technologies in machine learning to solve real-world problems, we encourage you to apply!

PhonePe Full-Time Employee Benefits (not applicable for intern or contract roles):
- Insurance benefits: medical insurance, critical illness insurance, accidental insurance, life insurance
- Wellness program: employee assistance program, onsite medical center, emergency support system
- Parental support: maternity benefit, paternity benefit program, adoption assistance program, day-care support program
- Mobility benefits: relocation benefits, transfer support policy, travel policy
- Retirement benefits: employee PF contribution, flexible PF contribution, gratuity, NPS, leave encashment
- Other benefits: higher education assistance, car lease, salary advance policy

Working at PhonePe is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us. Read more about PhonePe.

Posted 2 months ago

Apply

6 - 8 years

12 - 16 Lacs

Hyderabad

Remote

Job Title: Data Engineer

Job Summary: Are you passionate about building scalable data pipelines, optimizing ETL processes, and designing efficient data models? We are looking for a Databricks Data Engineer to join our team and play a key role in managing and transforming data in Azure cloud environments. In this role, you will work with Azure Data Factory (ADF), Databricks, Python, and SQL to develop robust data ingestion and transformation workflows. You'll also be responsible for integrating data sources, optimizing performance, and ensuring data quality and governance. If you have strong experience in big data processing, distributed computing (Spark), and data modeling, we'd love to hear from you!

Key Responsibilities:
1. Develop & optimize ETL pipelines: build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading.
2. Data modeling & systematic layer modeling: design logical, physical, and systematic data models for structured and unstructured data.
3. Database management: develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance.
4. Big data processing: work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage.
5. Data quality & governance: implement data validation, lineage tracking, and security measures for high-quality, compliant data.
6. Collaboration: work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability.
7. Testing and debugging: write unit tests and perform debugging to ensure the implementation is robust and error-free; conduct performance optimization and security audits.

Required Skills and Qualifications:
- Azure cloud expertise: strong experience with Azure Data Factory (ADF), Databricks, and Azure Synapse.
- Programming: proficiency in Python for data processing, automation, and scripting.
- SQL & database skills: advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation.
- Data modeling: hands-on experience in dimensional modeling, systematic layer modeling, and entity-relationship modeling.
- Big data frameworks: strong understanding of Apache Spark, Delta Lake, and distributed computing.
- Performance optimization: expertise in query optimization, indexing, and performance tuning.
- Data governance & security: knowledge of RBAC, encryption, and data privacy standards.

Preferred Qualifications:
- Experience with CI/CD for data pipelines using Azure DevOps.
- Knowledge of Kafka/Event Hub for real-time data processing.
- Experience with Power BI/Tableau for data visualization (not mandatory, but a plus).

Posted 2 months ago

Apply

10 - 16 years

35 - 100 Lacs

Mumbai

Work from Office

Job Summary
As an ATS (Account Technology Specialist) in NetApp's Sales function, you will utilize strong customer-handling and technical competencies to set objectives and execute plans for winning sales campaigns. This challenging and high-visibility position provides a huge opportunity to grow in your career and cover the largest account base in the region. You develop long-term strategies and shorter-term plans to meet aggressive performance goals with the channel partners and internal stakeholders, including the Client Executive and the District Manager. You must be extremely results-driven, customer-focused, tech-savvy, and skilled at building internal relationships and external partnerships.

Essential Functions
- Provide technical oversight to channel partners and customers within the territory, driving all pertinent issues, sales campaigns, and goal attainment.
- Work with the client executive towards meeting the territory's target by devising short-term goals and long-term strategies for the assigned accounts.
- Evangelise NetApp's proposition in the assigned territory.
- Drive technical closure in sales campaigns, positioning NetApp as the most viable solution for prospective customers.

Job Requirements
- Excellent verbal and written communication skills, including presentation skills.
- Proven experience in presales, designing and proposing technical solutions.
- Excellent presentation, relationship-building, and negotiating skills.
- Ability to work collaboratively with functional peers across functions, including Marketing, Sales, Sales Operations, Customer Support, and Product Development.
- Strong understanding of data storage, data protection, disaster recovery, and competitive offerings in the marketplace.
- Understanding of cloud technologies is highly desirable.
- Ability to convey and analyze information clearly to help customers make buying decisions.
- An excellent understanding of how technology products and solutions solve business problems.
- The ability to hold key technical decision-maker and CXO relationships within major accounts in the assigned territory.

Education
At least 15 years of experience in technical presales. A Bachelor of Science degree in Engineering, Computer Science, or a related field is preferred; a graduate degree is mandatory.

Posted 2 months ago

Apply