8.0 - 10.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Role Responsibilities :
- Design and implement data pipelines using MS Fabric (a brief sketch follows this posting).
- Develop data models to support business intelligence and analytics.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Collaborate with cross-functional teams to gather and define data requirements.
- Ensure data quality and integrity in all data processes.
- Implement best practices for data management, storage, and processing.
- Conduct performance tuning for data storage and retrieval for enhanced efficiency.
- Generate and maintain documentation for data architecture and data flow.
- Participate in troubleshooting data-related issues and implement solutions.
- Monitor and optimize cloud-based solutions for scalability and resource efficiency.
- Evaluate emerging technologies and tools for potential incorporation into projects.
- Assist in designing data governance frameworks and policies.
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Stay updated on industry trends and best practices in data engineering.
Qualifications :
- 8+ years of experience in data engineering roles.
- Strong expertise in MS Fabric and related technologies.
- Proficiency in SQL and relational database management systems.
- Experience with data warehousing solutions and data modeling.
- Hands-on experience with ETL tools and processes.
- Knowledge of cloud computing platforms (Azure, AWS, GCP).
- Familiarity with Python or similar programming languages.
- Ability to communicate complex concepts clearly to non-technical stakeholders.
- Experience in implementing data quality measures and data governance.
- Strong problem-solving skills and attention to detail.
- Ability to work independently in a remote environment.
- Experience with data visualization tools is a plus.
- Excellent analytical and organizational skills.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience with Agile methodologies and project management.
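For context, a minimal sketch of the kind of pipeline step this role describes, written as it might appear in a Microsoft Fabric Spark notebook; the lakehouse path, column names, and table name are hypothetical:

```python
# Hedged sketch of a Fabric-notebook ETL step; paths and names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: read raw CSV files landed in the lakehouse (hypothetical path).
raw = spark.read.option("header", True).csv("Files/raw/sales/")

# Transform: basic cleansing and type normalization.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .filter(F.col("amount").isNotNull())
)

# Load: persist a managed Delta table for downstream BI.
clean.write.format("delta").mode("overwrite").saveAsTable("sales_clean")
```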
Posted 1 day ago
6.0 - 8.0 years
9 - 13 Lacs
Mumbai
Work from Office
Job Title : Sr. Data Engineer Ontology & Knowledge Graph Specialist
Department : Platform Engineering
Role Description : This is a remote contract role for a Data Engineer (Ontology, 5+ years) at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.
Responsibilities :
Ontology Development :
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.
Data Modeling :
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.
Knowledge Graph Implementation :
- Design and build knowledge graphs based on ontologies and data models (a brief sketch follows this posting).
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.
Data Quality and Governance :
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.
Collaboration and Communication :
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.
Qualifications :
Education : Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Experience :
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.
Desired Skills :
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
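To illustrate the ontology and knowledge-graph work described above, a minimal sketch using the open-source rdflib library; the namespace, class, and instance names are illustrative and are not drawn from the actual BFO/CCO releases:

```python
# Hedged sketch: build a tiny ontology-backed graph and query it with SPARQL.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/ontology#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Declare a class and assert an instance, mimicking an ontological assertion.
g.add((EX.Supplier, RDF.type, RDFS.Class))
g.add((EX.acme, RDF.type, EX.Supplier))
g.add((EX.acme, RDFS.label, Literal("ACME Corp")))

# Query the graph with SPARQL.
results = g.query(
    "SELECT ?s WHERE { ?s a <http://example.org/ontology#Supplier> }"
)
for row in results:
    print(row.s)
```

In a production setting the same triples would typically be loaded into a dedicated triple store rather than held in memory.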
Posted 1 day ago
5.0 - 7.0 years
10 - 14 Lacs
Chennai
Work from Office
Job Title : Sr. Data Engineer Ontology & Knowledge Graph Specialist
Department : Platform Engineering
Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.
Responsibilities :
Ontology Development :
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.
Data Modeling :
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.
Knowledge Graph Implementation :
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.
Data Quality And Governance :
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.
Collaboration And Communication :
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.
Qualifications :
Education :
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Experience :
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL (a brief SHACL sketch follows this posting).
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.
Desired Skills :
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
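Since this role also calls for SHACL, a brief sketch of shape-based validation with the pyshacl library; the shape and data are inline Turtle invented for illustration:

```python
# Hedged sketch: validate instance data against a SHACL shape with pyshacl.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/#> .
ex:SupplierShape a sh:NodeShape ;
    sh:targetClass ex:Supplier ;
    sh:property [ sh:path ex:name ; sh:minCount 1 ] .
"""

data_ttl = """
@prefix ex: <http://example.org/#> .
ex:acme a ex:Supplier .   # missing ex:name, so validation should fail
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, report_graph, report_text = validate(data, shacl_graph=shapes)
print(conforms)      # False: the shape's minCount constraint is violated
print(report_text)
```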
Posted 1 day ago
6.0 - 8.0 years
22 - 25 Lacs
Bengaluru
Work from Office
We are looking for an energetic, self-motivated, and exceptional Data Engineer to work on extraordinary enterprise products based on AI and Big Data engineering, leveraging the AWS/Databricks tech stack. He/she will work with a star team of Architects, Data Scientists/AI Specialists, Data Engineers, and Integration specialists.
Skills and Qualifications :
- 5+ years of experience in the DWH/ETL domain; Databricks/AWS tech stack.
- 2+ years of experience in building data pipelines with Databricks/PySpark/SQL (a brief sketch follows this posting).
- Experience in writing and interpreting SQL queries, designing data models and data standards.
- Experience with SQL Server databases, Oracle, and/or cloud databases.
- Experience in data warehousing and data marts, Star and Snowflake models.
- Experience in loading data into databases from databases and files.
- Experience in analyzing and drawing design conclusions from data profiling results.
- Understanding of business processes and the relationships between systems and applications.
- Must be comfortable conversing with end-users.
- Must have the ability to manage multiple projects/clients simultaneously.
- Excellent analytical, verbal, and communication skills.
Role and Responsibilities :
- Work with business stakeholders and build data solutions to address analytical and reporting requirements.
- Work with application developers and business analysts to implement and optimize Databricks/AWS-based implementations meeting data requirements.
- Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow.
- Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation.
- Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases.
- Conduct root cause analysis and resolve production problems and data issues.
- Create and maintain up-to-date documentation of the data model, data flow, and field-level mappings.
- Provide support for production problems and daily batch processing.
- Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta Tables, Parquet), and views to ensure data integrity and performance.
Immediate joiners.
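As a rough illustration of the Databricks-style batch work described above, a PySpark sketch that cleanses JSON landed on S3 and persists a Delta table; the bucket, schema, and column names are placeholders:

```python
# Hedged sketch of a Databricks-style batch pipeline; names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: raw JSON events landed in an S3 bucket (placeholder path).
orders = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: parse timestamps and aggregate revenue per day.
daily = (
    orders.withColumn("order_ts", F.to_timestamp("order_ts"))
          .groupBy(F.to_date("order_ts").alias("order_date"))
          .agg(F.sum("amount").alias("revenue"))
)

# Load: write a Delta table (assumes an "analytics" schema already exists).
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")
```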
Posted 1 day ago
4.0 - 8.0 years
14 - 20 Lacs
Mohali
Work from Office
We're looking for a fast-moving, detail-oriented Data Engineer with deep experience in AWS data services, ETL processes, and reporting tools like QuickSight, Tableau, and Power BI. You'll play a key role in building and maintaining the data infrastructure that powers reporting, analytics, and strategic decision-making across the organization.
Key Responsibilities :
- Design, develop, and maintain scalable ETL pipelines to process and transform data from multiple sources (a brief Glue sketch follows this posting).
- Build and optimize data models, data lakes, and data warehouses using AWS services such as AWS Glue, Athena, Redshift, S3, Lambda, and QuickSight.
- Collaborate with business teams and analysts to deliver self-service reporting and dashboards via QuickSight.
- Ensure data integrity, security, and performance across all reporting platforms.
- Support cross-functional teams with ad hoc data analysis and the development of custom reporting solutions.
- Monitor data pipelines and troubleshoot issues as they arise.
Preferred Qualifications :
- 4+ years of experience as a Data Engineer or in a similar role.
- Strong hands-on experience with the AWS data ecosystem, particularly AWS Glue (ETL), Redshift, S3, Athena, Lambda, and QuickSight.
- Proficiency in SQL and scripting languages such as Python.
- Experience working with QuickSight, Tableau, and Power BI in a production environment.
- Strong understanding of data architecture, data warehousing, and dimensional modeling.
- Familiarity with data governance, quality checks, and best practices for data privacy and compliance.
- Comfortable working in a fast-paced, agile environment with shifting priorities.
- Excellent communication and collaboration skills.
Nice to Have :
- Experience with DevOps for data: Terraform, CloudFormation, or CI/CD pipelines for data infrastructure.
- Background in financial services, SaaS, or a data-heavy industry.
Why Join Us?
- Make a direct impact on high-visibility analytics and reporting projects.
- Work with modern cloud-native data tools and a collaborative, high-performance team.
- Flexible work environment with opportunities for growth and leadership.
Shift timings : Swing shift, 2:30 PM to 11:30 PM; cab and food will be provided.
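For orientation, a hedged sketch of an AWS Glue job script along the lines this posting describes: read a cataloged table, apply a field mapping, and write Parquet to S3. The database, table, and bucket names are placeholders:

```python
# Hedged sketch of an AWS Glue ETL job; catalog and S3 names are placeholders.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog.
src = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)

# Rename/retype fields with a declarative mapping.
mapped = ApplyMapping.apply(
    frame=src,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "double", "amount", "double")],
)

# Write curated Parquet back to S3 for Athena/QuickSight consumption.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```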
Posted 1 day ago
10.0 - 20.0 years
16 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Mode of work : Hybrid or Remote (Hyderabad or Pune candidates preferred for the hybrid working mode; candidates from anywhere in India are fine for remote.)
Job Description for AI Architect :
Mandatory Skills :
- SQL
- Data Science
- AI/ML
- GenAI
- Data Engineering
Optional Skills : Data Modelling
Role & Responsibilities :
- Strong Python programming expertise.
- Hands-on experience with AI-related Python tasks (e.g., data analysis, modeling).
- Experience with analyzing discrete datasets.
- Proficiency in working with pre-trained AI models.
- Problem-solving ability in real-world AI use cases.
Python Proficiency :
- Speed and accuracy in writing code.
- Understanding of syntax and debugging without external help.
- Ability to solve complex problems with minimal reliance on Google.
AI & Data Analysis Skills :
- Hands-on expertise in data manipulation and analysis.
- Understanding of AI modeling and implementation.
Problem-Solving & Debugging :
- Ability to analyze error logs and fix issues quickly.
- Logical approach to troubleshooting without relying on trivial searches.
Practical Application & Use Cases :
- Ability to apply AI/ML techniques to real-world datasets.
- Experience with implementing pre-trained models effectively.
Interview Task :
- Given a dataset, candidates must perform multivariate analysis (a brief sketch of this kind of task follows the posting).
- Implement modeling using pre-trained models.
- Debugging ability without excessive reliance on Google.
- Live coding assessment with an open-book approach (Google allowed, but candidates should not rely on basic algorithm searches).
Role-Specific Expertise :
- If applying for AI Architect, the assessment will focus on AI-related tasks.
- If applying for Data Engineer, tasks will be aligned with data engineering requirements.
Location : Hyderabad/Bangalore/Chennai/Pune/Noida/Delhi/anywhere in India
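A minimal sketch of the interview-style task above: quick multivariate analysis with pandas, followed by inference with a pre-trained model via Hugging Face transformers. The CSV path and column names are hypothetical:

```python
# Hedged sketch: multivariate analysis plus a pre-trained model, no training.
import pandas as pd
from transformers import pipeline

df = pd.read_csv("customer_feedback.csv")  # hypothetical dataset

# Multivariate analysis: summary statistics and pairwise correlations
# across the numeric columns.
numeric = df.select_dtypes("number")
print(numeric.describe())
print(numeric.corr())

# Pre-trained model: score a free-text column without any fine-tuning
# (downloads the default sentiment model on first use).
sentiment = pipeline("sentiment-analysis")
df["sentiment"] = [r["label"] for r in sentiment(df["comment"].tolist())]
print(df[["comment", "sentiment"]].head())
```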
Posted 1 day ago
7.0 - 9.0 years
15 - 30 Lacs
Thiruvananthapuram
Work from Office
Job Title : Senior Data Associate - Cloud Data Engineering
Experience : 7+ Years
Employment Type : Full-Time
Industry : Information Technology / Data Engineering / Cloud Platforms
Job Summary : We are seeking a highly skilled and experienced Senior Data Associate to join our data engineering team. The ideal candidate will have a strong background in cloud data platforms, big data processing, and enterprise data systems, with hands-on experience across both AWS and Azure ecosystems. This role involves building and optimizing data pipelines, managing large-scale data lakes and warehouses, and enabling advanced analytics and reporting.
Key Responsibilities :
- Design, develop, and maintain scalable data pipelines using AWS Glue, PySpark, and Azure Data Factory.
- Work with AWS Redshift, Athena, Azure Synapse, and Databricks to support data warehousing and analytics solutions (a brief Athena sketch follows this posting).
- Integrate and manage data across MongoDB, Oracle, and cloud-native storage like Azure Data Lake and S3.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality datasets.
- Implement data quality checks, monitoring, and governance practices.
- Optimize data workflows for performance, scalability, and cost-efficiency.
- Support data migration and modernization initiatives across cloud platforms.
- Document data flows, architecture, and technical specifications.
Required Skills & Qualifications :
- 7+ years of experience in data engineering, data integration, or related roles.
- Strong hands-on experience with: AWS Redshift, Athena, Glue, S3; Azure Data Lake, Synapse Analytics, Databricks; PySpark for distributed data processing; MongoDB and Oracle databases.
- Proficiency in SQL, Python, and data modeling.
- Experience with ETL/ELT design and implementation.
- Familiarity with data governance, security, and compliance standards.
- Strong problem-solving and communication skills.
Preferred Qualifications :
- Certifications in AWS (e.g., Data Analytics Specialty) or Azure (e.g., Azure Data Engineer Associate).
- Experience with CI/CD pipelines and DevOps for data workflows.
- Knowledge of data cataloging tools (e.g., AWS Glue Data Catalog, Azure Purview).
- Exposure to real-time data processing and streaming technologies.
Required Skills : Azure, AWS Redshift, Athena, Azure Data Lake
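As one concrete example of the AWS side of this role, a hedged boto3 sketch that runs an ad hoc Athena query and polls for completion; the database, query, and result bucket are placeholders:

```python
# Hedged sketch: run an Athena query from Python and poll until it finishes.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

run = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

query_id = run["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)  # simple polling; production code would back off and time out

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows[:5])
```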
Posted 1 day ago
12.0 - 15.0 years
5 - 5 Lacs
Thiruvananthapuram
Work from Office
Senior Data Architect - Big Data & Cloud Solutions
Experience : 10+ Years
Industry : Information Technology / Data Engineering / Cloud Computing
Job Summary : We are seeking a highly experienced and visionary Data Architect to lead the design and implementation of scalable, high-performance data solutions. The ideal candidate will have deep expertise in Apache Kafka, Apache Spark, AWS Glue, PySpark, and cloud-native architectures, with a strong background in solution architecture and enterprise data strategy.
Key Responsibilities :
- Design and implement end-to-end data architecture solutions on AWS using Glue, S3, Redshift, and other services.
- Architect and optimize real-time data pipelines using Apache Kafka and Spark Streaming (a brief sketch follows this posting).
- Lead the development of ETL/ELT workflows using PySpark and AWS Glue.
- Collaborate with stakeholders to define data strategies, governance, and best practices.
- Ensure data quality, security, and compliance across all data platforms.
- Provide technical leadership and mentorship to data engineers and developers.
- Evaluate and recommend new tools and technologies to improve data infrastructure.
- Translate business requirements into scalable and maintainable data solutions.
Required Skills & Qualifications :
- 10+ years of experience in data engineering, architecture, or related roles.
- Strong hands-on experience with: Apache Kafka (event streaming, topic design, schema registry); Apache Spark (batch and streaming); AWS Glue, S3, Redshift, Lambda, CloudFormation/Terraform; PySpark for large-scale data processing.
- Proven experience in solution architecture and designing cloud-native data platforms.
- Deep understanding of data modeling, data lakes, and data warehousing concepts.
- Strong programming skills in Python and SQL.
- Experience with CI/CD pipelines and DevOps practices for data workflows.
- Excellent communication and stakeholder management skills.
Preferred Qualifications :
- AWS Certified Solutions Architect or Big Data Specialty certification.
- Experience with data governance tools and frameworks.
- Familiarity with containerization (Docker, Kubernetes) and orchestration tools (Airflow, Step Functions).
- Exposure to machine learning pipelines and MLOps is a plus.
Required Skills : Apache, PySpark, AWS Cloud, Kafka
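A minimal sketch of the real-time pattern named above: Spark Structured Streaming reading a Kafka topic and writing a Delta sink. The broker address, topic, schema, and paths are hypothetical:

```python
# Hedged sketch: Kafka -> Spark Structured Streaming -> Delta; names are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

# Assumed payload schema for the JSON messages on the topic.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "orders")
         .load()
         # Kafka values arrive as bytes; decode and parse the JSON payload.
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
          .start("s3://example-bucket/delta/orders/")
)
query.awaitTermination()
```

The checkpoint location is what gives the pipeline exactly-once recovery semantics across restarts, which is why it is pinned to durable storage.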
Posted 1 day ago
10.0 - 15.0 years
14 - 19 Lacs
Chennai
Work from Office
Job Summary : We are seeking a highly motivated and experienced Delivery Partner to lead and oversee the successful delivery of analytics projects. The ideal candidate will possess a strong background in data analytics, excellent project management skills, and a proven ability to collaborate effectively with cross-functional teams. You will be responsible for ensuring projects are delivered on time, within budget, and to the required quality standards. This role requires a detail-oriented individual with excellent communication and problem-solving skills, and a passion for leveraging data to drive business value.
Responsibilities :
- Project Leadership : Lead and manage the full lifecycle of analytics projects, from initiation to deployment.
- Stakeholder Management : Collaborate with business stakeholders, product owners, and engineering teams to define project scope, requirements, and deliverables.
- Technical Guidance : Provide technical leadership and guidance to the development team, ensuring adherence to architectural standards and best practices.
- Data Architecture & Design : Contribute to data architecture design, data modeling (logical and physical), and the development of data integration strategies.
- Risk Management : Identify, assess, and mitigate project risks and issues, escalating as necessary.
- Resource Management : Manage project resources, including developers, data engineers, and business analysts, ensuring optimal allocation and utilization.
- Quality Assurance : Implement and maintain quality assurance processes to ensure data accuracy, integrity, and reliability.
- Communication & Reporting : Provide regular project status updates to stakeholders, including progress, risks, and issues.
- Process Improvement : Identify and implement process improvements to enhance project delivery efficiency and effectiveness.
- Vendor Management : Manage relationships with external vendors and partners, ensuring deliverables meet expectations and contractual obligations.
Qualifications :
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 15+ years of experience in data engineering, data warehousing, business intelligence, or a related area.
- Proven experience in managing complex data-related projects using Agile methodologies.
- Strong understanding of data warehousing concepts, data modeling techniques, and ETL processes.
- Hands-on experience with cloud platforms such as AWS, Azure, or Snowflake.
- Proficiency in SQL and experience with databases such as Teradata, Oracle, and SQL Server.
- Experience with data visualization tools such as Tableau or Power BI.
- Excellent communication, interpersonal, and leadership skills.
- Ability to work effectively in a matrix environment.
- Strong problem-solving and analytical skills.
- Experience with project management tools such as JIRA, MS Project, and Confluence.
Posted 1 day ago
3.0 - 6.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Duration : 6 Months
Notice Period : within 15 days or immediate joiner
Experience : 3-6 Years
About The Role : As a Data Engineer for the Data Science team, you will play a pivotal role in enriching and maintaining the organization's central repository of datasets. This repository serves as the backbone for advanced data analytics and machine learning applications, enabling actionable insights from financial and market data. You will work closely with cross-functional teams to design and implement robust ETL pipelines that automate data updates and ensure accessibility across the organization. This is a critical role requiring technical expertise in building scalable data pipelines, ensuring data quality, and supporting data analytics and reporting infrastructure for business growth.
Note : Must be ready for a face-to-face interview in Bangalore (last round). Should be working with Azure as the cloud technology.
Key Responsibilities :
ETL Development :
- Design, develop, and maintain efficient ETL processes for handling multi-scale datasets.
- Implement and optimize data transformation and validation processes to ensure data accuracy and consistency.
- Collaborate with cross-functional teams to gather data requirements and translate business logic into ETL workflows.
Data Pipeline Architecture :
- Architect, build, and maintain scalable and high-performance data pipelines to enable seamless data flow.
- Evaluate and implement modern technologies to enhance the efficiency and reliability of data pipelines.
- Build pipelines for extracting data via web scraping to source sector-specific datasets on an ad hoc basis (a brief sketch follows this posting).
Data Modeling :
- Design and implement data models to support analytics and reporting needs across teams.
- Optimize database structures to enhance performance and scalability.
Data Quality and Governance :
- Develop and implement data quality checks and governance processes to ensure data integrity.
- Collaborate with stakeholders to define and enforce data quality standards across the organization.
Documentation and Communication :
- Maintain detailed documentation of ETL processes, data models, and other key workflows.
- Effectively communicate complex technical concepts to non-technical stakeholders and business teams.
Collaboration :
- Work closely with the Quant team and developers to design and optimize data pipelines.
- Collaborate with external stakeholders to understand business requirements and translate them into technical solutions.
Essential Requirements / Basic Qualifications :
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Familiarity with big data technologies like Hadoop, Spark, and Kafka.
- Experience with data modeling tools and techniques.
- Excellent problem-solving, analytical, and communication skills.
- Proven experience as a Data Engineer with expertise in ETL techniques (minimum years).
- 3-6 years of strong programming experience in languages such as Python, Java, or Scala.
- Hands-on experience in web scraping to extract and transform data from publicly available web sources.
- Proficiency with cloud-based data platforms such as AWS, Azure, or GCP.
- Strong knowledge of SQL and experience with relational and non-relational databases.
- Deep understanding of data warehousing concepts.
Preferred Qualifications :
- Master's degree in Computer Science or Data Science.
- Knowledge of data streaming and real-time processing frameworks.
- Familiarity with data governance and security best practices.
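A brief sketch of the ad hoc web-scraping ingestion mentioned above, using requests and BeautifulSoup; the URL, selectors, and column names are placeholders:

```python
# Hedged sketch: scrape a public HTML table into a DataFrame for ingestion.
import requests
import pandas as pd
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/sector-stats", timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for tr in soup.select("table tr")[1:]:          # skip the header row
    cells = [td.get_text(strip=True) for td in tr.select("td")]
    if cells:
        rows.append(cells)

# Column names assume a hypothetical two-column table.
df = pd.DataFrame(rows, columns=["sector", "value"])
print(df.head())
```

A production pipeline would add retries, politeness delays, and schema validation before loading the result into the central repository.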
Posted 1 day ago
4.0 - 6.0 years
6 - 11 Lacs
Noida
Work from Office
Responsibilities :
- Collaborate with the sales team to understand customer challenges and business objectives, and propose solutions, POCs, etc.
- Develop and deliver impactful technical presentations and demos showcasing the capabilities of GCP Data & AI and GenAI solutions.
- Conduct technical proof-of-concepts (POCs) to validate the feasibility and value proposition of GCP solutions.
- Collaborate with technical specialists and solution architects from the COE team to design and configure tailored cloud solutions.
- Manage and qualify sales opportunities, working closely with the sales team to progress deals through the sales funnel.
- Stay up to date on the latest GCP offerings, trends, and best practices.
Experience :
- Design and implement a comprehensive strategy for migrating and modernizing existing relational on-premise databases to a scalable and cost-effective solution on Google Cloud Platform (GCP).
- Design and architect solutions for DWH modernization, with experience building data pipelines in GCP (a brief BigQuery sketch follows this posting).
- Strong experience with BI reporting tools (Looker, Power BI, and Tableau).
- In-depth knowledge of Google Cloud Platform (GCP) services, particularly Cloud SQL, Postgres, AlloyDB, BigQuery, Looker, Vertex AI, and Gemini (GenAI).
- Strong knowledge and experience in providing solutions to process massive datasets in real time and in batch using cloud-native/open-source orchestration techniques.
- Build and maintain data pipelines using Cloud Dataflow to orchestrate real-time and batch data processing for streaming and historical data.
- Strong knowledge and experience in best practices for data governance, security, and compliance.
- Excellent communication and presentation skills, with the ability to tailor technical information to customer needs.
- Strong analytical and problem-solving skills.
- Ability to work independently and as part of a team.
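For illustration, a minimal sketch of querying BigQuery from Python with the official google-cloud-bigquery client; the project, dataset, and table names are placeholders:

```python
# Hedged sketch: run a BigQuery aggregation from Python; names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT region, SUM(revenue) AS revenue
    FROM `example-project.sales.orders`
    GROUP BY region
    ORDER BY revenue DESC
"""

# result() blocks until the job completes, then yields rows.
for row in client.query(sql).result():
    print(row["region"], row["revenue"])
```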
Posted 1 day ago
5.0 - 6.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Responsibilities :
- Architect and optimize distributed data processing pipelines leveraging PySpark for high-throughput, low-latency workloads (a brief sketch follows this posting).
- Utilize the Apache big data stack (Hadoop, Hive, HDFS) to orchestrate ingestion, transformation, and governance of massive datasets.
- Engineer fault-tolerant, production-grade ETL frameworks ensuring seamless scalability and system resilience.
- Interface cross-functionally with Data Scientists and domain experts to translate analytical needs into performant data solutions.
- Enforce rigorous data quality controls and lineage mechanisms to uphold auditability and regulatory compliance.
- Contribute to core architectural design, implement clean and modular Python/Java code, and drive performance benchmarking at scale.
Required Skills :
- 5-7 years of experience.
- Strong hands-on experience with PySpark for distributed data processing.
- Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.).
- Solid grasp of data warehousing, ETL principles, and data modeling.
- Experience working with large-scale datasets and performance optimization.
- Familiarity with SQL and NoSQL databases.
- Proficiency in Python and basic to intermediate knowledge of Java.
- Experience using version control tools like Git and CI/CD pipelines.
Nice-to-Have Skills :
- Working experience with Apache NiFi for data flow orchestration.
- Experience in building real-time streaming data pipelines.
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.
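A short sketch of a Hive-backed PySpark job of the kind described; the database, table, and partition names are illustrative:

```python
# Hedged sketch: read a partitioned Hive table, aggregate, and write back.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("hive-etl-sketch")
    .enableHiveSupport()   # lets Spark resolve tables from the Hive metastore
    .getOrCreate()
)

# Read one partition of a Hive table (placeholder names).
txns = spark.table("warehouse.transactions").filter(F.col("dt") == "2024-01-01")

# Aggregate and persist the result as a new Hive table.
summary = txns.groupBy("account_id").agg(F.sum("amount").alias("total"))
summary.write.mode("overwrite").saveAsTable("warehouse.daily_totals")
```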
Posted 1 day ago
5.0 - 10.0 years
4 - 8 Lacs
Nagpur
Remote
Designation / Position : Data Engineer SAP BW/4HANA
Work Experience Required : 5-10 years
Job Location(s) : Remote
Industry Type : IT / Software / SAP Services
Functional Area : Data Engineering / SAP BW/4HANA / Automation
Qualification : Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field
Responsibilities :
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems (a brief SFTP sketch follows this posting).
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills :
- SAP BW/4HANA : Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA : Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP : Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles : Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools : Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (Beneficial) : Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark) and experience integrating with SAP BW/4HANA is a plus.
- File Transfer Protocols : Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL : Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (Beneficial) : Experience with SAP BW Query Designer and BEx Analyzer for creating and troubleshooting reports is a plus.
- SAP HANA (Underlying Database) : Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills :
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications :
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points :
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
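A hedged sketch of the SFTP data-exchange piece mentioned in the posting, using the paramiko library; the host, credentials, and file paths are placeholders, and a real job would read credentials from a secrets store:

```python
# Hedged sketch: move flat files over SFTP for a BW/4HANA load; names are placeholders.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("sftp.example.com", username="bw_batch", password="change-me")

sftp = ssh.open_sftp()
# Download the flat file a BW/4HANA load expects, then push an acknowledgment.
sftp.get("/outbound/sales_20240101.csv", "/tmp/sales_20240101.csv")
sftp.put("/tmp/ack.txt", "/inbound/ack.txt")

sftp.close()
ssh.close()
```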
Posted 1 day ago
5.0 - 10.0 years
4 - 8 Lacs
Nashik
Remote
Designation / Position : Data Engineer SAP BW/4HANA
Work Experience Required : 5-10 years
Job Location(s) : Remote
Industry Type : IT / Software / SAP Services
Functional Area : Data Engineering / SAP BW/4HANA / Automation
Qualification : Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field
Responsibilities :
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills :
- SAP BW/4HANA : Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA : Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP : Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles : Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools : Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (Beneficial) : Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark) and experience integrating with SAP BW/4HANA is a plus.
- File Transfer Protocols : Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL : Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (Beneficial) : Experience with SAP BW Query Designer and BEx Analyzer for creating and troubleshooting reports is a plus.
- SAP HANA (Underlying Database) : Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills :
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications :
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points :
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
Posted 1 day ago
5.0 - 10.0 years
4 - 8 Lacs
Kolkata
Remote
Designation / Position : Data Engineer SAP BW/4HANA
Work Experience Required : 5-10 years
Job Location(s) : Remote
Industry Type : IT / Software / SAP Services
Functional Area : Data Engineering / SAP BW/4HANA / Automation
Qualification : Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field
Responsibilities :
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources (a brief Hive sketch follows this posting).
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills :
- SAP BW/4HANA : Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA : Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP : Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles : Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools : Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (Beneficial) : Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark) and experience integrating with SAP BW/4HANA is a plus.
- File Transfer Protocols : Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL : Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (Beneficial) : Experience with SAP BW Query Designer and BEx Analyzer for creating and troubleshooting reports is a plus.
- SAP HANA (Underlying Database) : Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills :
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications :
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points :
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
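As a sketch of the Hadoop-side integration this posting mentions, querying a Hive staging table with the PyHive library; the host, database, and table names are placeholders:

```python
# Hedged sketch: read from a Hive staging area (e.g., one feeding BW/4HANA).
from pyhive import hive

conn = hive.Connection(host="hadoop-edge.example.com", port=10000,
                       username="etl_user", database="staging")
cursor = conn.cursor()
cursor.execute("SELECT material_id, qty FROM bw_staging_deliveries LIMIT 10")
for material_id, qty in cursor.fetchall():
    print(material_id, qty)
cursor.close()
conn.close()
```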
Posted 1 day ago
5.0 - 10.0 years
4 - 8 Lacs
Kanpur
Remote
Qualification : Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field
Responsibilities :
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills :
- SAP BW/4HANA : Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA : Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP : Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles : Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools : Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (Beneficial) : Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark) and experience integrating with SAP BW/4HANA is a plus.
- File Transfer Protocols : Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL : Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (Beneficial) : Experience with SAP BW Query Designer and BEx Analyzer for creating and troubleshooting reports is a plus.
- SAP HANA (Underlying Database) : Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills :
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications :
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points :
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
Posted 1 day ago
5.0 - 10.0 years
4 - 8 Lacs
Agra
Remote
Qualification : Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field
Responsibilities :
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills :
- SAP BW/4HANA : Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA : Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP : Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles : Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools : Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (Beneficial) : Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark) and experience integrating with SAP BW/4HANA is a plus.
- File Transfer Protocols : Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL : Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (Beneficial) : Experience with SAP BW Query Designer and BEx Analyzer for creating and troubleshooting reports is a plus.
- SAP HANA (Underlying Database) : Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills :
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications :
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points :
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
Posted 1 day ago
5.0 - 10.0 years
4 - 8 Lacs
Chennai
Remote
Qualification : Bachelors or Masters Degree in Computer Science, Information Technology, or related field Responsibilities : - Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity. - Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA. - Optimize existing data models, data flows, and query performance in SAP BW/4HANA. - Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms. - Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources. - Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge. - Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management. - Write clear and concise technical documentation for developed data solutions and processes. - Work with File Transfer Protocols (e.g., SFTP, FTP) to manage data exchange with various systems. - Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation. - Ensure adherence to data governance policies and standards. - Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system. - Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools. - Contribute to the continuous improvement of our data engineering processes and methodologies. Technical Skills : - SAP BW/4HANA : Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes. - ABAP for BW/4HANA : Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA. - AMDP : Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement. - Data Engineering Principles : Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles. - Automation Tools : Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows. - Hadoop (Beneficial) : Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark) and experience integrating with SAP BW/4HANA is a plus. - File Transfer Protocols : Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange. - SQL : Strong SQL skills for data querying and analysis. - SAP BW Query Designer and BEx Analyzer (Beneficial) : Experience with SAP BW Query Designer and BEx Analyzer for creating and troubleshooting reports is a plus. - SAP HANA (Underlying Database) : Good understanding of SAP HANA as the underlying database for SAP BW/4HANA. Functional Skills : - Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions. 
- Excellent communication skills, both written and verbal, with the ability to communicate effectively with technical and non-technical stakeholders in an international setting. - Proven ability to collaborate effectively within a geographically distributed team. - Experience working with project management tools such as Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management. - Familiarity with collaboration tools such as Jira and Confluence. - Ability to work independently and manage tasks effectively in a remote environment. - Adaptability to different cultural norms and communication styles within a global team. Qualifications : - Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field. - 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA. - Proven track record of successful implementation and support of SAP BW/4HANA data solutions. - Strong understanding of data warehousing principles and best practices. - Excellent communication and collaboration skills in an international environment. Bonus Points : - SAP BW/4HANA certification. - Experience with other data warehousing technologies. - Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA. - Experience with agile development methodologies.
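For context only (not part of the listing above): the posting asks for hands-on work with file transfer protocols such as SFTP to exchange data with external systems. A minimal Python sketch of that kind of pull, using the paramiko library, might look like the following; the host, account, key path, and file paths are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch of an SFTP pull for a flat-file data load.
# Host, credentials, and paths are illustrative placeholders.
import paramiko

HOST = "sftp.example.com"        # hypothetical partner endpoint
USER = "bw_loader"               # hypothetical service account
KEY_PATH = "/home/bw/.ssh/id_rsa"

def fetch_extract(remote_path: str, local_path: str) -> None:
    """Download one extract file from the partner SFTP server."""
    key = paramiko.RSAKey.from_private_key_file(KEY_PATH)
    transport = paramiko.Transport((HOST, 22))
    try:
        transport.connect(username=USER, pkey=key)
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.get(remote_path, local_path)   # binary-safe copy
        sftp.close()
    finally:
        transport.close()

if __name__ == "__main__":
    fetch_extract("/outbound/sales_delta.csv", "/data/inbound/sales_delta.csv")
```

In practice a pull like this would usually be wrapped in a scheduled workflow (e.g., an Automic job) rather than run by hand.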
Posted 1 day ago
6.0 - 9.0 years
9 - 16 Lacs
Bengaluru
Work from Office
Experience in SQL, Azure Data Factory (ADF), and data modeling is a must. • Experience in Logic Apps and Azure integrations is nice to have. • Good communication skills; the role requires connecting with stakeholders directly.
Posted 1 day ago
6.0 - 8.0 years
9 - 13 Lacs
Kolkata
Remote
Role Description : This is a remote contract role for a Data Engineer (Ontology, 5+ years) at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality and Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration and Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
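For context only: the listing above names RDF and SPARQL among the required semantic web technologies. The sketch below is a minimal, hypothetical illustration of those two pieces using Python's rdflib; the namespace, class, and property names are invented for the example and are not actual BFO/CCO terms.

```python
# Build a tiny RDF graph and query it with SPARQL using rdflib.
from rdflib import RDF, Graph, Namespace

EX = Namespace("http://example.com/ontology#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Assert a few instance-level triples.
g.add((EX.pipeline_42, RDF.type, EX.DataPipeline))
g.add((EX.pipeline_42, EX.feeds, EX.sales_mart))
g.add((EX.sales_mart, RDF.type, EX.DataProduct))

# SPARQL: which pipelines feed which data products?
results = g.query("""
    PREFIX ex: <http://example.com/ontology#>
    SELECT ?pipeline ?product
    WHERE { ?pipeline ex:feeds ?product . ?product a ex:DataProduct . }
""")
for pipeline, product in results:
    print(pipeline, "->", product)
```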
Posted 1 day ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Hybrid
Shift : (GMT+05:30) Asia/Kolkata (IST) What do you need for this opportunity? Must-have skills required: Data Engineering, Big Data Technologies, Hadoop, Spark, Hive, Presto, Airflow, Data Modeling, ETL Development, Data Lake Architecture, Python, Scala, GCP BigQuery, Dataproc, Dataflow, Cloud Composer, AWS, Big Data Stack, Azure, GCP Wayfair is looking for: About the job The Data Engineering team within the SMART org supports the development of large-scale data pipelines for machine learning and analytical solutions related to unstructured and structured data. You'll have the opportunity to gain hands-on experience with all kinds of systems in the data platform ecosystem. Your work will have a direct impact on all applications that our millions of customers interact with every day: search results, homepage content, emails, auto-complete searches, browse pages, and product carousels. You will also build and scale the data platforms that measure the effectiveness of Wayfair's ad spend and power the media attribution that informs both day-to-day and major marketing decisions. About the Role: As a Data Engineer, you will be part of the Data Engineering team. The role is inherently cross-functional: the ideal candidate will work with data scientists, analysts, and application teams across the company, as well as all other Data Engineering squads at Wayfair. We are looking for someone with a love for data, the ability to understand requirements clearly, and the ability to iterate quickly. Successful candidates will have strong engineering and communication skills and a belief that data-driven processes lead to phenomenal products. What you'll do: Build and launch data pipelines and data products focused on the SMART org, helping teams push the boundaries of insights, creating new product features using data, and powering machine learning models. Build cross-functional relationships to understand data needs, build key metrics, and standardize their usage across the organization. Utilize current and leading-edge technologies in software engineering, big data, streaming, and cloud infrastructure. What You'll Need: Bachelor's/Master's degree in Computer Science or a related technical subject area, or an equivalent combination of education and experience. 3+ years of relevant work experience in the Data Engineering field with web-scale data sets. Demonstrated strength in data modeling, ETL development, and data lake architecture. Data warehousing experience with big data technologies (Hadoop, Spark, Hive, Presto, Airflow, etc.). Coding proficiency in at least one modern programming language (Python, Scala, etc.). Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing, along with query performance tuning skills for large data sets. Industry experience as a Big Data Engineer working alongside cross-functional teams such as Software Engineering, Analytics, and Data Science, with a track record of manipulating, processing, and extracting value from large datasets. Strong business acumen. Experience leading large-scale data warehousing and analytics projects, including using GCP technologies (BigQuery, Dataproc, GCS, Cloud Composer, Dataflow) or related big data technologies in other cloud platforms like AWS, Azure, etc. Be a team player and introduce/follow best practices in the data engineering space. Ability to effectively communicate (both written and verbally) technical information and the results of engineering design at all levels of the organization.
Good to have : Exposure to NoSQL databases and Pub/Sub architecture setup. Familiarity with BI tools such as Looker, Tableau, AtScale, Power BI, or similar.
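For context only: since the listing above calls out Airflow for pipeline orchestration, here is a minimal, hypothetical sketch of a two-step daily DAG. It assumes Airflow 2.x (where the `schedule` argument replaced `schedule_interval`); the DAG id and task bodies are placeholder stubs, not Wayfair code.

```python
# Minimal Airflow 2.x DAG: ingest raw events, then build an aggregate.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_events():
    """Pull the day's raw event files into the landing zone (stub)."""
    print("ingesting raw events ...")

def build_attribution():
    """Aggregate events into a media-attribution table (stub)."""
    print("building attribution metrics ...")

with DAG(
    dag_id="ads_attribution_daily",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_events", python_callable=ingest_events)
    attribute = PythonOperator(task_id="build_attribution", python_callable=build_attribution)
    ingest >> attribute  # the aggregate runs only after ingestion succeeds
```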
Posted 1 day ago
6.0 - 9.0 years
9 - 13 Lacs
Mumbai
Work from Office
Experience : 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric. Responsibilities : - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills : - Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems such as SAP ECC, S/4HANA, or SAP BW is a plus. - Nice to have: the ability to develop dashboards or reports using tools like Power BI. Coding Fluency : - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
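For context only: a minimal, hypothetical PySpark sketch of the kind of notebook transformation described above, a bronze-to-silver hop over Delta tables in a Fabric Lakehouse. The table and column names are invented for the example.

```python
# Bronze -> silver hop: de-duplicate, derive a date column, apply a quality rule.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Raw rows as landed by an ingestion pipeline (hypothetical table).
bronze = spark.read.table("bronze.sales_orders")

silver = (
    bronze
    .dropDuplicates(["order_id"])                      # de-dupe on business key
    .withColumn("order_date", F.to_date("order_ts"))   # derive a date column
    .filter(F.col("amount") > 0)                       # basic quality rule
)

# Overwrite the curated Delta table consumed by reports and models.
silver.write.format("delta").mode("overwrite").saveAsTable("silver.sales_orders")
```

In a Fabric notebook the Spark session is typically pre-provisioned, so the `getOrCreate()` call simply attaches to it.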
Posted 1 day ago
5.0 - 7.0 years
5 - 9 Lacs
Mumbai
Work from Office
We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services. - Design, build, and maintain scalable and efficient data pipelines and ETL processes. - Develop and optimize ClickHouse databases for high-performance analytics. - Create RESTful APIs using FastAPI to expose data services. - Work with Kubernetes for container orchestration and deployment of data services. - Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse. - Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality. - Monitor, troubleshoot, and improve the performance of data infrastructure. - Strong experience in ClickHouse: data modeling, query optimization, and performance tuning. - Expertise in SQL, including complex joins, window functions, and optimization. - Proficient in Python, especially for data processing (pandas, NumPy) and scripting. - Experience with FastAPI for creating lightweight APIs and microservices. - Hands-on experience with PostgreSQL: schema design, indexing, and performance. - Solid knowledge of Kubernetes: managing containers, deployments, and scaling. - Understanding of software engineering best practices (CI/CD, version control, testing). - Experience with cloud platforms like AWS, GCP, or Azure. - Knowledge of data warehousing and distributed data systems. - Familiarity with Docker, Helm, and monitoring tools like Prometheus/Grafana.
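For context only: a minimal, hypothetical sketch of the FastAPI-plus-ClickHouse pattern this listing describes, using the clickhouse-driver package. The host, table, and column names are invented for the example.

```python
# FastAPI endpoint serving a ClickHouse aggregate.
from clickhouse_driver import Client
from fastapi import FastAPI

app = FastAPI()
ch = Client(host="clickhouse.internal")   # hypothetical host

@app.get("/metrics/daily-events")
def daily_events(days: int = 7):
    """Return event counts per day for the last `days` days."""
    rows = ch.execute(
        "SELECT toDate(event_ts) AS d, count() AS n "
        "FROM events "
        "WHERE event_ts >= now() - INTERVAL %(days)s DAY "
        "GROUP BY d ORDER BY d",
        {"days": days},
    )
    return [{"date": str(d), "count": n} for d, n in rows]
```

Served with, e.g., `uvicorn app:app`, the endpoint returns JSON that a BI layer or frontend can consume directly.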
Posted 1 day ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
The Platform Data Engineer will be responsible for designing and implementing robust data platform architectures, integrating diverse data technologies, and ensuring scalability, reliability, performance, and security across the platform. The role involves setting up and managing infrastructure for data pipelines, storage, and processing, developing internal tools to enhance platform usability, implementing monitoring and observability, collaborating with software engineering teams for seamless integration, and driving capacity planning and cost optimization initiatives.
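For context only: one small, hypothetical slice of the monitoring and observability work mentioned above, instrumented with the prometheus_client library. The metric names and port are arbitrary choices for the sketch.

```python
# Expose pipeline metrics (row counts, step latency) for Prometheus to scrape.
import time

from prometheus_client import Counter, Histogram, start_http_server

ROWS_PROCESSED = Counter("pipeline_rows_processed_total", "Rows processed")
STEP_SECONDS = Histogram("pipeline_step_duration_seconds", "Step duration")

@STEP_SECONDS.time()
def run_step(batch):
    """Process one batch and record how many rows it contained."""
    time.sleep(0.01)              # stand-in for real work
    ROWS_PROCESSED.inc(len(batch))

if __name__ == "__main__":
    start_http_server(9102)       # scrape endpoint for Prometheus
    while True:
        run_step(range(100))
```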
Posted 2 days ago
5.0 - 8.0 years
12 - 15 Lacs
Pune
Remote
Job Title: Java Developer Required Experience: 7+ years Job Overview: We are looking for a passionate Java developer with 7+ years of experience to join our dynamic team. The ideal candidate should have a solid understanding of Java programming, experience with web frameworks, and a strong desire to develop efficient, scalable, and maintainable applications. Key Responsibilities: Design, develop, and maintain scalable and high-performance Java applications. Write clean, modular, and well-documented code that follows industry best practices. Collaborate with cross-functional teams to define, design, and implement new features. Debug, test, and troubleshoot applications across various platforms and environments. Participate in code reviews and contribute to the continuous improvement of development processes. Work with databases such as MySQL and PostgreSQL to manage application data. Implement and maintain RESTful APIs for communication between services and front-end applications. Assist in optimizing application performance and scalability. Stay updated with emerging technologies and apply them in development projects when appropriate. Requirements: 7+ years of experience in Java development. Strong knowledge of Core Java, OOP concepts, Struts, and Java SE/EE. Experience with the Spring Framework (Spring Boot, Spring MVC) or Hibernate for developing web applications. Familiarity with RESTful APIs and web services. Proficiency in working with relational databases like MySQL or PostgreSQL. Familiarity with JavaScript, HTML5, and CSS3 for front-end integration. Basic knowledge of version control systems like Git. Experience with Agile/Scrum development methodologies. Understanding of unit testing frameworks such as JUnit or TestNG. Strong problem-solving and analytical skills. Experience with Kafka. Experience with GCP. Preferred Skills: Experience with DevOps tools like Docker, Kubernetes, or CI/CD pipelines. Familiarity with microservice architecture and containerization. Experience with NoSQL databases like MongoDB is a plus. Company Overview: Aventior is a leading provider of innovative technology solutions for businesses across a wide range of industries. At Aventior, we leverage cutting-edge technologies like AI, MLOps, DevOps, and many more to help our clients solve complex business problems and drive growth. We also provide a full range of data development and management services, including Cloud Data Architecture, Universal Data Models, Data Transformation & ETL, Data Lakes, User Management, Analytics and Visualization, and automated data capture (for scanned documents and unstructured/semi-structured data sources). Our team of experienced professionals combines deep industry knowledge with expertise in the latest technologies to deliver customized solutions that meet the unique needs of each of our clients. Whether you are looking to streamline your operations, enhance your customer experience, or improve your decision-making process, Aventior has the skills and resources to help you achieve your goals. We bring a well-rounded cross-industry and multi-client perspective to our client engagements. Our strategy is grounded in design, implementation, innovation, migration, and support. We have a global delivery model, a multi-country presence, and a team well-equipped with professionals and experts in the field.
Posted 2 days ago