6.0 - 11.0 years
11 - 18 Lacs
Noida, Greater Noida, Delhi / NCR
Work from Office
Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes using Domo.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Implement data transformation and data warehousing solutions to support business intelligence and analytics.
- Optimize and troubleshoot data workflows to ensure efficiency and reliability.
- Develop and maintain documentation for data processes and systems.
- Ensure data quality and integrity through rigorous testing and validation.
- Monitor and manage data infrastructure to ensure optimal performance.
- Stay updated with industry trends and best practices in data engineering and Domo.
Mandatory Skills: Domo, Data Transformation Layer (SQL, Python), Data Warehouse Layer (SQL, Python)
Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer, with a strong focus on data transformation and data warehousing.
- Proficiency in Domo and its various tools and functionalities.
- Experience with SQL, Python, and other relevant programming languages.
- Strong understanding of ETL processes and data pipeline architecture.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently and as part of a team.
- Strong communication skills to collaborate effectively with stakeholders.
Preferred Qualifications:
- Knowledge of data visualization and reporting tools.
- Familiarity with Agile methodologies and project management tools.
- Data Transformation Layer (SQL, Python).
- Data Warehouse Layer (SQL, Python).
Share your resume at Aarushi.Shukla@coforge.com
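To illustrate the kind of transformation-layer step this role describes, here is a minimal Python sketch of an extract-transform-load pass; the file and column names are hypothetical, and pandas stands in for logic that Domo would normally run inside its own dataflow tooling.

import pandas as pd

raw = pd.read_csv("raw_orders.csv")              # hypothetical extracted dataset
clean = (
    raw.drop_duplicates(subset=["order_id"])     # enforce one row per order
    .dropna(subset=["region", "amount"])         # drop rows unusable downstream
)
summary = (
    clean.groupby("region", as_index=False)["amount"]
    .sum()
    .rename(columns={"amount": "total_revenue"})
)
summary.to_csv("regional_revenue.csv", index=False)  # load step for the warehouse layer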
Posted 2 days ago
7.0 - 12.0 years
1 - 5 Lacs
Hyderabad, Pune
Hybrid
JD - Data Engineering
Candidate expectations:
- Hands-on knowledge of Spark or Scala with Kafka.
- Experience with the AWS ecosystem is good to have.
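As a rough illustration of the Spark-with-Kafka skill named above, here is a minimal PySpark Structured Streaming sketch; the broker address and topic name are placeholders, and the spark-sql-kafka connector package is assumed to be available on the cluster.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Read a hypothetical "events" topic from a local broker (both placeholders).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

# Echo the stream to the console; a real job would write to a durable sink instead.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()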
Posted 2 days ago
5.0 - 10.0 years
8 - 13 Lacs
Mumbai
Remote
Qualification: Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field
Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up to date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from SAP and non-SAP sources, transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark); experience integrating them with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (beneficial): Experience creating and troubleshooting reports is a plus.
- SAP HANA: Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills:
- Strong analytical and problem-solving skills, with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to communicate effectively with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications:
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
Posted 2 days ago
6.0 - 8.0 years
9 - 13 Lacs
Chennai
Work from Office
This is a remote contract role for a Data Engineer (Ontology, 5+ years) at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.
Responsibilities:
Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.
Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.
Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.
Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.
Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.
Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Experience: 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.
Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies, including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
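To make the semantic-web stack above concrete, here is a minimal sketch using Python's rdflib (a library choice of ours, not named in the posting) to assert a few triples and query them with SPARQL; the namespace and entities are hypothetical.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")    # hypothetical ontology namespace
g = Graph()

# Assert two triples: a typed individual and a data property.
g.add((EX.acme, RDF.type, EX.Organization))
g.add((EX.acme, EX.hasName, Literal("Acme Corp")))

# Retrieve every organization and its name via SPARQL.
results = g.query(
    """SELECT ?org ?name WHERE {
           ?org a <http://example.org/Organization> ;
                <http://example.org/hasName> ?name .
       }"""
)
for org, name in results:
    print(org, name)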
Posted 2 days ago
3.0 - 6.0 years
9 - 13 Lacs
Mumbai
Work from Office
About the Job:
- As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform.
- You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure.
- This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.
What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.
You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
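As a small illustration of the pipeline work described above, here is a hedged PySpark sketch of a batch aggregation written to Delta; the paths and columns are placeholders, and Delta Lake support is assumed on the cluster.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-orders").getOrCreate()

orders = spark.read.json("/mnt/raw/orders/")          # hypothetical landing zone
daily = (
    orders.withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "country")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("customers"),
    )
)

# Persist the curated table partitioned by date for downstream analytics.
(daily.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/daily_orders/"))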
Posted 2 days ago
3.0 - 8.0 years
15 - 19 Lacs
Mumbai
Hybrid
Responsibilities:
- Develop and maintain data pipelines using GCP.
- Write and optimize queries in BigQuery.
- Utilize Python for data processing tasks.
- Manage and maintain SQL Server databases.
Must-Have Skills:
- Experience with Google Cloud Platform (GCP).
- Proficiency in BigQuery query writing.
- Strong Python programming skills.
- Expertise in SQL Server.
Good to Have:
- Knowledge of MLOps practices.
- Experience with Vertex AI.
- Background in data science.
- Familiarity with any data visualization tool.
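A minimal sketch of BigQuery query writing from Python using the google-cloud-bigquery client; the project, dataset, and table names are placeholders, and configured credentials are assumed.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")   # hypothetical project id

sql = """
    SELECT region, SUM(amount) AS total
    FROM `my-project.sales.orders`               -- hypothetical table
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY region
"""

# Run the query and iterate over the result rows.
for row in client.query(sql).result():
    print(row.region, row.total)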
Posted 2 days ago
5.0 - 7.0 years
11 - 15 Lacs
Coimbatore
Work from Office
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks
The Role:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools (see the sketch below).
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.
Requirements:
- Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
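The validation sketch referenced above, as a hedged PySpark example: count null keys and duplicates before promoting a batch. The Hive table name and key column are hypothetical, and a Hive-enabled SparkSession is assumed.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("batch-validation")
    .enableHiveSupport()
    .getOrCreate()
)

df = spark.table("staging.transactions")          # hypothetical Hive table

# Two simple quality gates: no null business keys, no duplicate keys.
null_keys = df.filter(F.col("txn_id").isNull()).count()
dupes = df.count() - df.dropDuplicates(["txn_id"]).count()

if null_keys or dupes:
    raise ValueError(f"validation failed: {null_keys} null keys, {dupes} duplicates")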
Posted 2 days ago
5.0 - 7.0 years
9 - 13 Lacs
Bengaluru
Work from Office
We are looking for an experienced SSAS Data Engineer with strong expertise in SSAS (Tabular and/or Multidimensional models), SQL, MDX/DAX, and data modeling. The ideal candidate will have a solid background in designing and developing BI solutions, working with large datasets, and building scalable SSAS cubes for reporting and analytics. Experience with ETL processes and reporting tools like Power BI is a strong plus.
Key Responsibilities:
- Design, develop, and maintain SSAS models (Tabular and/or Multidimensional).
- Build and optimize MDX or DAX queries for advanced reporting needs.
- Create and manage data models (star/snowflake schemas) supporting business KPIs.
- Develop and maintain ETL pipelines for efficient data ingestion (preferably using SSIS or similar tools).
- Implement KPIs, aggregations, partitioning, and performance tuning in SSAS cubes.
- Collaborate with data analysts, business stakeholders, and Power BI teams to deliver accurate and insightful reporting solutions.
- Maintain data quality and consistency across data sources and reporting layers.
- Implement RLS/OLS and manage report security and governance in SSAS and Power BI.
Required Skills:
Primary:
- SSAS Tabular & Multidimensional.
- SQL Server (advanced SQL, views, joins, indexes).
- DAX & MDX.
- Data modeling & OLAP concepts.
Secondary:
- ETL tools (SSIS or equivalent).
- Power BI or similar BI/reporting tools.
- Performance tuning & troubleshooting in SSAS and SQL.
- Version control (TFS/Git), deployment best practices.
Posted 2 days ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Neo4j
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education
DevOps Engineer
A DevOps engineer in the platform teams should have the following experience and expertise:
- Experience with Azure cloud infrastructure deployment, configuration, and maintenance via YAML/Bicep/Terraform, covering:
  - Databricks
  - VNets
  - Virtual Machines
  - App Services
  - Storage
  - Container Apps
- Using Azure DevOps Boards to manage the CI/CD pipelines and our Git repos.
- Automated monitoring and incident management.
- Experience with (complex) Azure infrastructure.
- Improve and maintain a high level of security and compliance for the infrastructure.
- Knowledge of infrastructure as code (IaC).
- Solid communication skills to ensure ideas and opinions can be shared easily.
Nice to have:
- ETL knowledge (data engineering) to be able to support and help our platform customers with their solutions.
- Experience with Neo4j, the network database type hosted by the team.
- Works in Bangalore, to increase team feeling even more.
- Experience working with Docker.
Qualification: 15 years full time education
Posted 2 days ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Neo4j
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education
A DevOps engineer in the platform teams should have the following experience and expertise:
- Experience with Azure cloud infrastructure deployment, configuration, and maintenance via YAML/Bicep/Terraform, covering:
  - Databricks
  - VNets
  - Virtual Machines
  - App Services
  - Storage
  - Container Apps
- Using Azure DevOps Boards to manage the CI/CD pipelines and our Git repos.
- Automated monitoring and incident management.
- Experience with (complex) Azure infrastructure.
- Improve and maintain a high level of security and compliance for the infrastructure.
- Knowledge of infrastructure as code (IaC).
- Solid communication skills to ensure ideas and opinions can be shared easily.
Nice to have:
- ETL knowledge (data engineering) to be able to support and help our platform customers with their solutions.
- Experience with Neo4j, the network database type hosted by the team.
- Works in Bangalore, to increase team feeling even more.
- Experience working with Docker.
Qualification: 15 years full time education
Posted 2 days ago
5.0 - 10.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Snowflake Data Warehouse
Good to have skills: Data Building Tool, Python (Programming Language)
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Your day will involve working on data solutions and collaborating with teams to optimize data processes.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Develop and maintain data pipelines.
- Ensure data quality and integrity.
- Implement ETL processes.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in Snowflake Data Warehouse.
- Good To Have Skills: Experience with Data Building Tool.
- Strong understanding of data architecture.
- Proficiency in SQL and database management.
- Experience with cloud data platforms.
- Knowledge of data modeling.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Qualification: 15 years full time education
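As an illustration of routine Snowflake work from Python, here is a hedged sketch using the Snowflake Python connector; the account, credentials, and object names are placeholders, not values from the posting.

import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # Rebuild a hypothetical summary table from a raw orders table.
    cur.execute("""
        CREATE OR REPLACE TABLE daily_sales AS
        SELECT order_date, SUM(amount) AS revenue
        FROM raw.orders
        GROUP BY order_date
    """)
finally:
    cur.close()
    conn.close()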
Posted 2 days ago
15.0 - 20.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Data Engineering
Good to have skills: Oracle Procedural Language Extensions to SQL (PL/SQL), AWS Redshift, Tableau
Minimum 12 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring that applications are developed according to the specified requirements and standards, and that they are delivered on time and within budget. Your typical day will involve collaborating with the team to design and develop applications, configuring and customizing applications based on business needs, and troubleshooting and resolving any issues that arise during development. You will also be involved in testing and deploying applications, as well as providing support and maintenance for existing applications.
Roles & Responsibilities:
- Excellent SQL skills, with experience building and interpreting complex queries, creating logical and physical data models, and advanced SQL programming.
- Advanced working SQL knowledge and experience with relational databases and query authoring (SQL).
- Experience designing, coding, testing, and analyzing applications leveraging RDBMS (Redshift, MySQL, and MS SQL Server databases).
- Assist the Delivery and Operations teams with customization requests and technical feasibility responses to clients.
- Expert experience with performance tuning, optimization, and stored procedures.
- Work with BRMs directly for planning, solutioning, assessment, urgent issues, and consultation; represent the BRM in meetings when there is a time conflict or they are unavailable.
- Resolve all blockers so offshore operations run smoothly during offshore hours.
- Help the offshore team build understanding, resolve conflicts, and close understanding gaps; deal with cultural differences and make communication easier.
- Take initiative, drive continuous improvement, and promote best practices that have worked well in the past.
- Build bridges beyond the project boundary, helping other vendors such as PWC, DK, and Beghou work together to achieve client deliverables.
- Build standard operations processes and continuous improvement to help EISAI IT and the business make decisions.
Professional & Technical Skills:
- Experience in Data Engineering, Data Quality, AWS Redshift, SQL, Tableau, Enterprise Data Warehouse, Jira, ServiceNow, Confluence, UNIX shell scripting, and Python.
- Must To Have Skills: Proficiency in Data Engineering.
- Good To Have Skills: Experience with Oracle Procedural Language Extensions to SQL (PL/SQL), AWS Redshift, Tableau.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.
Functional/Industry Skills:
- LS-Pharma on commercial datasets.
Additional Information:
- The candidate should have a minimum of 12 years of experience in Data Engineering.
- This position is based at our Pune office.
- A 15 years full time education is required.
Qualification: 15 years full time education
Posted 2 days ago
5.0 - 10.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Microsoft Azure Databricks, PySpark, Core Java
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with teams to develop innovative solutions and streamline processes.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the development and implementation of new applications.
- Conduct code reviews and ensure coding standards are met.
- Stay updated on industry trends and best practices.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with PySpark.
- Strong understanding of data engineering concepts.
- Experience in building and optimizing data pipelines.
- Knowledge of cloud platforms like Microsoft Azure.
- Familiarity with data governance and security practices.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Qualification: 15 years full time education
Posted 2 days ago
10.0 - 14.0 years
10 - 15 Lacs
Hyderabad
Work from Office
- Proven expert at writing SQL code, with at least 10 years of experience.
- 5+ years of experience working with large data volumes, with transactions on the order of 5-10M records.
- 5+ years of experience modeling loosely coupled relational databases that can store terabytes or petabytes of data.
- 3+ years of proven expertise working with large data warehouses.
- Expert at ETL transformations using SSIS.
Posted 2 days ago
5.0 - 7.0 years
12 - 15 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
We are looking for a skilled Data Engineer with expertise in SSIS, Tableau, SQL, and ETL processes. The ideal candidate should have experience in Data Modeling, Data Pipelines, and Agile methodologies. Responsibilities include designing and maintaining data pipelines, implementing ETL processes using SSIS, optimizing data models for reporting, and developing advanced dashboards in Tableau. The role requires proficiency in SQL for complex data transformations, troubleshooting data workflows, and ensuring data integrity and compliance. Strong problem-solving skills, Agile collaboration experience, and the ability to work independently in a remote setup are essential.
Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 2 days ago
8.0 - 13.0 years
15 - 19 Lacs
Chennai
Work from Office
Project Role: Technology Architect
Project Role Description: Review and integrate all application requirements, including functional, security, integration, performance, quality and operations requirements. Review and integrate the technical architecture requirements. Provide input into final decisions regarding hardware, network products, system software and security.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Cloud Data Architecture
Minimum 5 year(s) of experience is required.
Educational Qualification: BE or MCA
Summary: As a Technology Architect, you will be responsible for reviewing and integrating all application requirements, including functional, security, integration, performance, quality, and operations requirements. Your typical day will involve reviewing and integrating technical architecture requirements and providing input into final decisions regarding hardware, network products, system software, and security.
Roles & Responsibilities:
- Should have a minimum of 8 years of experience in Databricks Unified Data Analytics Platform.
- Good experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform.
- Should have a strong educational background in technology and information architectures, along with a proven track record of delivering impactful data-driven solutions.
- Strong requirement analysis and technical solutioning skills in Data and Analytics.
- Client-facing role: running solution workshops and client visits, handling large RFP pursuits, and managing multiple stakeholders.
Technical Experience:
- 6 or more years of experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform.
- 2 or more years of experience using Python, PySpark, or Scala.
- Experience with Databricks on cloud, in any of AWS, Azure, or GCP (e.g., ETL, data engineering, data cleansing, and insertion into a data warehouse).
- Must-have skills: Databricks, Cloud Data Architecture, Python Programming Language, Data Engineering.
Professional Attributes:
- Excellent writing, communication, and presentation skills.
- Eagerness to learn and develop oneself on an ongoing basis.
- Excellent client-facing and interpersonal skills.
Qualification: BE or MCA
Posted 2 days ago
5.0 - 10.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: Graduate
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: No Technology Specialization
Key Responsibilities:
1. Show strong development skill in PySpark and Databricks to build complex data pipelines.
2. Should be able to deliver assigned development tasks independently or with little help.
3. Should be able to participate in daily status calls and have good communication skills to manage day-to-day work.
Technical Experience:
1. Should have more than 5 years of experience in IT.
2. Should have more than 2 years of experience in technologies like Databricks and PySpark.
3. Should be able to build end-to-end pipelines using PySpark, with good knowledge of Delta Lake.
4. Should have good knowledge of Azure services like Azure Data Factory and Azure storage solutions such as ADLS, Delta Lake, and Azure AD.
Professional Attributes:
1. Should have been involved in data engineering projects from the requirements phase through delivery.
2. Good communication skills to interact with the client and understand the requirements.
3. Should be capable of working independently and guiding the team.
Educational Qualification: Graduate
Additional Info: Skill flex for PySpark; only Bengaluru; should be flexible to work from the client office.
Qualification: Graduate
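To illustrate the Delta Lake knowledge called for above, here is a hedged PySpark sketch of an upsert (MERGE) into a Delta table on ADLS; the storage paths and key column are hypothetical, and the delta-spark package is assumed on the cluster.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("customer-upsert").getOrCreate()

# Hypothetical ADLS paths for the incoming batch and the curated Delta table.
updates = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/customers/")
target = DeltaTable.forPath(spark, "abfss://curated@account.dfs.core.windows.net/customers/")

# Upsert: update matching customers, insert new ones.
(target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())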
Posted 2 days ago
5.0 - 10.0 years
4 - 8 Lacs
Patna
Work from Office
Qualification: Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field
Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up to date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from SAP and non-SAP sources, transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark); experience integrating them with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (beneficial): Experience creating and troubleshooting reports is a plus.
- SAP HANA: Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills:
- Strong analytical and problem-solving skills, with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to communicate effectively with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications:
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
Posted 2 days ago
5.0 - 10.0 years
4 - 8 Lacs
Surat
Work from Office
Qualification: Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field
Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up to date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from SAP and non-SAP sources, transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark); experience integrating them with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (beneficial): Experience creating and troubleshooting reports is a plus.
- SAP HANA: Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills:
- Strong analytical and problem-solving skills, with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to communicate effectively with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications:
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
Posted 2 days ago
3.0 - 6.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Duration: 6 Months
Timings: General IST
Notice Period: within 15 days or immediate joiner
About the Role:
As a Data Engineer for the Data Science team, you will play a pivotal role in enriching and maintaining the organization's central repository of datasets. This repository serves as the backbone for advanced data analytics and machine learning applications, enabling actionable insights from financial and market data. You will work closely with cross-functional teams to design and implement robust ETL pipelines that automate data updates and ensure accessibility across the organization. This is a critical role requiring technical expertise in building scalable data pipelines, ensuring data quality, and supporting the data analytics and reporting infrastructure for business growth.
Note: Must be ready for a face-to-face interview in Bangalore (last round). Should be working with Azure as the cloud technology.
Key Responsibilities:
ETL Development:
- Design, develop, and maintain efficient ETL processes for handling multi-scale datasets.
- Implement and optimize data transformation and validation processes to ensure data accuracy and consistency.
- Collaborate with cross-functional teams to gather data requirements and translate business logic into ETL workflows.
Data Pipeline Architecture:
- Architect, build, and maintain scalable and high-performance data pipelines to enable seamless data flow.
- Evaluate and implement modern technologies to enhance the efficiency and reliability of data pipelines.
- Build pipelines for extracting data via web scraping to source sector-specific datasets on an ad hoc basis (see the sketch below).
Data Modeling:
- Design and implement data models to support analytics and reporting needs across teams.
- Optimize database structures to enhance performance and scalability.
Data Quality and Governance:
- Develop and implement data quality checks and governance processes to ensure data integrity.
- Collaborate with stakeholders to define and enforce data quality standards across the organization.
Documentation and Communication:
- Maintain detailed documentation of ETL processes, data models, and other key workflows.
- Effectively communicate complex technical concepts to non-technical stakeholders and business users.
Cross-Functional Collaboration:
- Work closely with the Quant team and developers to design and optimize data pipelines.
- Collaborate with external stakeholders to understand business requirements and translate them into technical solutions.
Essential Requirements:
Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Familiarity with big data technologies like Hadoop, Spark, and Kafka.
- Experience with data modeling tools and techniques.
- Excellent problem-solving, analytical, and communication skills.
- Proven experience as a Data Engineer with expertise in ETL techniques (minimum years).
- 3-6 years of strong programming experience in languages such as Python, Java, or Scala.
- Hands-on experience in web scraping to extract and transform data from publicly available web sources.
- Proficiency with cloud-based data platforms such as AWS, Azure, or GCP.
- Strong knowledge of SQL and experience with relational and non-relational databases.
- Deep understanding of data warehousing concepts and architectures.
Preferred Qualifications:
- Master's degree in Computer Science or Data Science.
- Knowledge of data streaming and real-time processing frameworks.
- Familiarity with data governance and security best practices.
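The web-scraping sketch referenced above: fetch a page, pull a simple table, and land it as CSV. The URL, table id, and columns are all hypothetical placeholders, and requests, BeautifulSoup, and pandas are library choices of ours, not requirements from the posting.

import pandas as pd
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/sector-prices", timeout=30)  # placeholder URL
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = [
    [cell.get_text(strip=True) for cell in tr.find_all("td")]
    for tr in soup.select("table#prices tr")[1:]   # skip the header row; id is hypothetical
]

# Land the scraped rows as a CSV for the downstream pipeline to pick up.
pd.DataFrame(rows, columns=["ticker", "price"]).to_csv("sector_prices.csv", index=False)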
Posted 2 days ago
2.0 - 6.0 years
20 - 30 Lacs
Bengaluru
Work from Office
Job Responsibilities:
The Data Scientists will leverage expertise in advanced statistical and modelling techniques to design, prototype, and build the next-generation analytics engines and services. They will work closely with the analytics and business teams to derive actionable insights, helping the organization achieve its strategic goals. Their work will involve high levels of interaction with the integrated analytics team, including data engineers, translators, and more senior data scientists.
- Has expertise in implementing complex statistical analyses for data processing, exploration, model building, and implementation.
- Leads teams of 2-3 associate data scientists in the use case building and delivery process.
- Can communicate complex technical concepts to both technical and non-technical audiences.
- Plays a key role in driving ideation around the modelling process and developing models; can conceptualize and drive re-iteration and fine-tuning of models.
- Contributes to knowledge building and sharing by researching best practices, documenting solutions, and continuously iterating on new ways to solve problems; mentors junior team members to do the same.
Required Education and Experience:
- Master's degree in Computer Science, Statistics, Math, Operations Research, Economics, or a related field.
- Advanced programming skills in at least one coding language (R/Python/Scala).
- Practical experience developing advanced statistical and machine learning models.
- At least 2 years of relevant analytics experience.
- Experience using large database systems preferred.
- Has developed niche expertise in at least one functional domain.
Required Skills:
- Ability to work well in agile environments in diverse teams with multiple stakeholders.
- Experience leading small teams.
- Able to break complex problems down into simpler parts.
- Ability to effectively communicate complex analytical and technical content.
- High energy and passion; works closely with other team members.
- Strong entrepreneurial drive to test new, out-of-the-box techniques.
- Able to prioritize workstreams and adopt an agile approach.
- Willing to adopt an iterative approach; experimental mindset to drive innovation.
Location: Bangalore
What's in it for you?
- Disruptive projects: Work on 'breakthrough' digital-and-analytics projects to enable UPL's vision of building a future-ready organization. This involves deploying solutions to help us increase our sales, sustain our profitability, improve our speed to market, supercharge our R&D efforts, and support the way we work internally. Help us ensure we have access to the best business insights that our data analysis can offer.
- Cross-functional leadership exposure: Work directly under the guidance of functional leadership at UPL on the most critical business problems for the organization (and the industry) today. You will gain exposure to a large cross-functional team (e.g., spanning manufacturing, procurement, commercial, quality, and IT/OT experts), allowing multi-functional learning in D&A deployment.
- Environment fostering professional and personal development: Strengthen professional learning in a highly impact-oriented and meritocratic environment focused on delivering disproportionate business value through innovative solutions, supported by on-the-job coaching from experienced domain experts and continuous feedback from a highly motivated and capable set of peers.
Comprehensive training programs for continuous development through UPL's D&A academy will help in accelerating growth opportunities. Come join us in this transformational journey! Let’s collectively Change the game with Digital & Analytics!
Posted 2 days ago
10.0 - 20.0 years
15 - 30 Lacs
Pune
Work from Office
Role & Responsibilities
Python Architect at Cybage:
1. Part of the central architect pool, assisting existing customers and providing technical solutions for new prospects/customers.
2. R&D on new technologies/frameworks/tools in the Python ecosystem.
3. Providing architectural services for turnkey projects and new clients with large-scale requirements.
4. Travel on a need basis for new customer engagements and during the discovery phase to transfer knowledge.
Python Architect - Web/API/Application:
- Designing, building, and maintaining scalable and secure services and REST APIs in at least one Python framework such as Django, Flask, or FastAPI, or in Python with gRPC (a minimal sketch follows at the end of this posting).
- Expertise in at least one RDBMS such as Postgres, MySQL, or Oracle, and one NoSQL database such as MongoDB or Redis.
- Familiarity with different caching strategies and use of at least one caching solution such as Redis or Memcached to implement them.
- Designing for distributed/asynchronous jobs and familiarity with tools such as Celery, Redis Queue, or Kafka to implement them.
- Building these services on at least one cloud platform such as AWS, Azure, or GCP, with an eye on scalability, performance, and high availability.
- Experienced in building these using containerization (Docker, etc.) and orchestration (Kubernetes, etc.) techniques.
- Ensuring code quality with at least one tool/linter such as pylint, black, or flake8.
- Using automated tests to validate the services and APIs built, in order to allow for continuous delivery.
- Effectively monitoring and observing these services by tracking application logs through at least one logging tool such as Splunk, Datadog Logs, or AWS CloudWatch, and application metrics such as latency and concurrency through at least one monitoring tool such as Datadog APM/RUM or AWS CloudWatch.
- Expertise in identifying the right thresholds and alerts on these services and plugging them into alerting tools such as PagerDuty, OpsGenie, or AWS CloudWatch alarms, in order to respond to incidents quickly and effectively.
Python Architect - Data:
- Designing, building, and maintaining effective and scalable data solutions using Python.
- Creating and maintaining data integration processes, ETL (Extract, Transform, Load) workflows, and data pipelines (Airflow, etc.) to seamlessly transport data between systems.
- Expertise in parallel processing of massive datasets, using Spark, Hadoop, etc.
- Experienced in working with datasets hosted in at least one data warehouse such as Snowflake or Amazon Redshift.
- Familiarity with reporting on datasets using at least one BI tool such as Looker, Tableau, Power BI, or QuickSight.
- Expertise in at least one RDBMS such as Postgres, MySQL, or Oracle, and one NoSQL database such as MongoDB or Redis.
- Building these on at least one cloud platform such as AWS, Azure, or GCP, with an eye on scalability, performance, and high availability.
- Experienced in building these using containerization (Docker, etc.) and orchestration (Kubernetes, etc.) techniques.
- Ensuring code quality with at least one tool/linter such as pylint, black, or flake8.
- Using automated tests to validate the services built, in order to allow for continuous delivery.
- Effectively monitoring and observing these services by tracking service logs through at least one logging tool such as Splunk, Datadog Logs, or AWS CloudWatch, and service metrics such as latency and concurrency through at least one monitoring tool such as Datadog APM or AWS CloudWatch.
- Expertise in identifying the right thresholds and alerts on these services and plugging them into alerting tools such as PagerDuty, OpsGenie, or AWS CloudWatch alarms, in order to respond to incidents quickly and effectively.
- Passion for maintaining software configurations in code and familiarity with at least one of Ansible, Terraform, or Helm.
Company Profile:
Founded in 1995, Cybage Software Pvt. Ltd. is a technology consulting organization specialized in outsourced product engineering services. As a leader in the technology and product engineering space, Cybage works with some of the world's best independent software vendors. Our solutions are focused on modern technologies and are enabled by a scientific, data-driven system called Decision Mines for Digital Excellence. This unique model de-risks our approach, provides better predictability, and ensures better value per unit cost to our clients. An ISO 27001 certified company based in Pune, India, Cybage is partnered with more than 200 global software houses of fine repute. Our array of services includes Product Engineering (OPD), Enterprise Business Solutions, Value Added Services, and Idea Incubation Services. Cybage specializes in the implementation of the Offshore Development Center (ODC) model. You will get an opportunity to be part of a highly skilled talent pool of more than 7,500 employees. Apart from Pune, we have operations hubs in GNR and Hyderabad, and we have also marked our presence in North America, Canada, UK, Europe, Japan, Australia, and Singapore. We provide seamless services and dependable deliveries to our clients from diverse industry verticals such as Media and Advertising, Travel and Hospitality, Digital Retail, Healthcare, SCM, and Hi-Tech. For more information, log on to www.cybage.com
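The Web/API sketch referenced above: a minimal FastAPI endpoint with a Redis cache-aside read, illustrating the service pattern the role describes. The host, port, and the stand-in price lookup are hypothetical, and the fastapi and redis packages are assumed installed.

import redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379)   # assumption: a local Redis instance

@app.get("/prices/{ticker}")
def get_price(ticker: str) -> dict:
    cached = cache.get(ticker)                     # cache-aside read
    if cached is not None:
        return {"ticker": ticker, "price": float(cached), "cached": True}
    price = 42.0                                   # stand-in for a real backend lookup
    cache.set(ticker, price, ex=60)                # keep the value warm for 60 seconds
    return {"ticker": ticker, "price": price, "cached": False}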
Posted 2 days ago
1.0 - 5.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Job Responsibilities:
The Data Scientists will leverage expertise in advanced statistical and modelling techniques to design, prototype, and build the next-generation analytics engines and services. They will work closely with the analytics and business teams to derive actionable insights, helping the organization achieve its strategic goals. Their work will involve high levels of interaction with the integrated analytics team, including data engineers, translators, and more senior data scientists.
- Has expertise in implementing complex statistical analyses for data processing, exploration, model building, and implementation.
- Leads teams of 2-3 associate data scientists in the use case building and delivery process.
- Can communicate complex technical concepts to both technical and non-technical audiences.
- Plays a key role in driving ideation around the modelling process and developing models; can conceptualize and drive re-iteration and fine-tuning of models.
- Contributes to knowledge building and sharing by researching best practices, documenting solutions, and continuously iterating on new ways to solve problems; mentors junior team members to do the same.
Required Education and Experience:
- Master's degree in Computer Science, Statistics, Math, Operations Research, Economics, or a related field.
- Advanced programming skills in at least one coding language (R/Python/Scala).
- Practical experience developing advanced statistical and machine learning models.
- At least 2 years of relevant analytics experience.
- Experience using large database systems preferred.
- Has developed niche expertise in at least one functional domain.
Required Skills:
- Ability to work well in agile environments in diverse teams with multiple stakeholders.
- Experience leading small teams.
- Able to break complex problems down into simpler parts.
- Ability to effectively communicate complex analytical and technical content.
- High energy and passion; works closely with other team members.
- Strong entrepreneurial drive to test new, out-of-the-box techniques.
- Able to prioritize workstreams and adopt an agile approach.
- Willing to adopt an iterative approach; experimental mindset to drive innovation.
Posted 2 days ago
6.0 - 11.0 years
10 - 20 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Job Location: Pune, Bangalore, or Gurugram
Available to join immediately or within a notice period of up to 30 days.
Mandatory Skills: Python, Spark, Airflow, SQL, Snowflake
- Over 5 years of overall experience in the data engineering and analytics industry.
- 3+ years of hands-on experience with Python, Apache Spark, and Apache Airflow for building scalable data pipelines and ETL workflows.
- Proficient in SQL with strong knowledge of data querying and transformation; experience with Snowflake is a plus.
- Solid experience working with both relational (e.g., PostgreSQL, MySQL) and non-relational databases (e.g., MongoDB, Cassandra).
- Strong understanding of data modeling principles and the design of both batch and real-time data pipelines.
- Proven track record of developing robust, scalable solutions in cloud environments such as AWS, Azure, or GCP.
- Well-versed in DevOps practices including CI/CD, infrastructure as code, and containerization.
- Experienced in Agile development methodologies, with active participation in sprint planning, standups, and retrospectives.
For more information, please share your updated CV at admin@spearheadps.com or contact me via call/WhatsApp at 9899080360.
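As a small illustration of the Python/Airflow combination required above, here is a hedged sketch of a two-task DAG; the DAG id, schedule, and task bodies are placeholders, and a recent Airflow 2.x release is assumed.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")      # stand-in for a real extract step

def transform():
    print("running Spark job")        # stand-in for submitting a Spark transform

with DAG(
    dag_id="daily_sales_pipeline",    # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Wire the extract step ahead of the transform step.
    PythonOperator(task_id="extract", python_callable=extract) \
        >> PythonOperator(task_id="transform", python_callable=transform)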
Posted 2 days ago
4.0 - 6.0 years
0 - 2 Lacs
Pune
Hybrid
Must Have Skills:
- 4+ years of experience in data engineering, with a focus on big data technologies (e.g., Spark, Kafka).
- 2+ years of Databricks experience is a must.
- Strong understanding of data architecture, ETL processes, and data warehousing.
- Proficiency in programming languages such as Python or Java.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data tools.
- Excellent communication, interpersonal, and leadership skills.
- Ability to work in a fast-paced environment and manage multiple priorities.
Preferred Candidate Profile:
- Solid written, verbal, and presentation communication skills.
- Strong team and individual player.
- Maintains composure in all types of situations and is collaborative by nature.
- High standards of professionalism, consistently producing high-quality results.
- Self-sufficient and independent, requiring very little supervision or intervention.
- Demonstrates flexibility and openness to bring creative solutions to address issues.
Posted 2 days ago