6.0 - 11.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Develop, test, and support future-ready data solutions for customers across industry verticals. Develop, test, and support end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, information management, and associated technologies. Communicate risks and ensure they are understood.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Graduate with a minimum of 6 years of related experience.
- Experience in modelling and business system design.
- Strong hands-on experience with DataStage and cloud-based ETL services.
- Expertise in writing T-SQL code.
- Well-versed in data warehouse schemas and OLAP techniques.

Preferred technical and professional experience:
- Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate.
- Strong team player and leader, able to lead data transformation projects with multiple junior data engineers.
- Strong oral, written, and interpersonal skills for interacting at all levels of the organization.
- Ability to communicate complex business problems and technical solutions.
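The OLAP techniques this role asks for typically involve rolling measures up through dimension hierarchies. A minimal sketch of such a rollup in plain Python (the table and column names are illustrative, not from the posting):

```python
from collections import defaultdict

# Toy fact rows (region, product, revenue), as a star-schema fact
# table joined to its dimensions might yield.
sales = [
    ("North", "Laptop", 1200),
    ("North", "Phone", 800),
    ("South", "Laptop", 500),
    ("South", "Phone", 300),
]

def rollup(rows, level):
    """Aggregate revenue at a chosen grouping level.

    level=0 groups by region (drill up from product);
    level=1 groups by (region, product) (full detail).
    """
    totals = defaultdict(int)
    for region, product, revenue in rows:
        key = (region,) if level == 0 else (region, product)
        totals[key] += revenue
    return dict(totals)

by_region = rollup(sales, level=0)
by_region_product = rollup(sales, level=1)
```

In a warehouse the same idea would be expressed in T-SQL with `GROUP BY ROLLUP(region, product)`.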
Posted 1 month ago
2.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys. Your primary responsibilities include:
- Comprehensive feature development and issue resolution: work on end-to-end feature development and solve challenges faced in the implementation.
- Stakeholder collaboration and issue resolution: collaborate with key internal and external stakeholders to understand problems and issues with the product and its features, and resolve them within the defined SLAs.
- Continuous learning and technology integration: stay eager to learn new technologies and apply them in feature development.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Cloud data engineer with GCP PDE certification and working experience with GCP.
- Experience building end-to-end data pipelines in GCP using Pub/Sub, BigQuery, Dataflow, Cloud Workflows/Cloud Scheduler, Cloud Run, Dataproc, and Cloud Functions.
- Experience in logging and monitoring of GCP services.
- Experience in Terraform and infrastructure automation.
- Expertise in Python.
- Develops, supports, and maintains data engineering solutions on the Google Cloud ecosystem.

Preferred technical and professional experience:
- Stay updated with the latest trends and advancements in cloud technologies, frameworks, and tools.
- Conduct code reviews and provide constructive feedback to maintain code quality and ensure adherence to best practices.
- Troubleshoot and debug issues and deploy applications to the cloud platform.
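In a Pub/Sub-to-BigQuery pipeline of the kind this posting describes, the core logic is usually a function that turns a raw message payload into a row for the destination table. A hedged sketch of just that transform step; the event schema and the surrounding client calls (shown only in comments) are assumptions for illustration:

```python
import json

def message_to_row(payload: bytes) -> dict:
    """Parse a Pub/Sub-style JSON payload into a BigQuery-ready row.

    The event fields (event_id, user, amount_cents) are illustrative.
    """
    event = json.loads(payload.decode("utf-8"))
    return {
        "event_id": event["event_id"],
        "user": event["user"],
        # Store currency as a decimal string to avoid float rounding.
        "amount": f"{event['amount_cents'] / 100:.2f}",
    }

# In the real pipeline this would run inside a subscriber callback or a
# Dataflow DoFn, and the row would be streamed to BigQuery, roughly:
#   client.insert_rows_json("project.dataset.events", [row])

row = message_to_row(b'{"event_id": "e1", "user": "u42", "amount_cents": 1999}')
```

Keeping the transform pure like this makes it unit-testable without any cloud dependencies.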
Posted 1 month ago
3.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Work from Office
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Proof of concept (POC) development: develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions.
- Help showcase the ability of a Gen AI code assistant to refactor, rewrite, and document code from one language to another.
- Document solution architectures, design decisions, implementation details, and lessons learned.
- Stay up to date with the latest trends and advancements in AI, foundation models, and large language models.
- Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation.

Preferred technical and professional experience:
- Experience and working knowledge in COBOL and Java is preferred.
- Experience in code generation, code matching, and code translation leveraging LLM capabilities is a big plus.
- A growth mindset for understanding clients' business processes and challenges.
Posted 1 month ago
2.0 - 5.0 years
13 - 17 Lacs
Gurugram
Work from Office
As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Develop/convert databases (Hadoop to GCP) and their objects (tables, views, procedures, functions, triggers, etc.) from one database platform to another.
- Implement a specific data replication mechanism (CDC, file data transfer, bulk data transfer, etc.).
- Expose data as APIs.
- Participate in the modernization roadmap journey.
- Analyze discovery and analysis outcomes; lead discovery and analysis workshops/playbacks.
- Identify application dependencies and source/target database incompatibilities.
- Analyze non-functional requirements (security, HA, RTO/RPO, storage, compute, network, performance benchmarks, etc.).
- Prepare effort estimates, WBS, staffing plan, RACI, RAID, etc.
- Lead the team in adopting the right tools for the various migration and modernization methods.

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.
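Change data capture (CDC), one of the replication mechanisms listed above, reduces to computing insert/update/delete sets between a source snapshot and the replicated target. A minimal keyed-snapshot diff in Python; representing a snapshot as a dict keyed by primary key is an assumption for illustration:

```python
def cdc_diff(source: dict, target: dict):
    """Compare two table snapshots keyed by primary key.

    Returns the rows to insert, update, and delete so that `target`
    matches `source` after the changes are applied.
    """
    inserts = {k: v for k, v in source.items() if k not in target}
    updates = {k: v for k, v in source.items()
               if k in target and target[k] != v}
    deletes = [k for k in target if k not in source]
    return inserts, updates, deletes

# Row 3 is new, row 2 changed, row 4 was removed at the source.
source = {1: "alice", 2: "bob", 3: "carol"}
target = {1: "alice", 2: "bobby", 4: "dave"}
ins, upd, dels = cdc_diff(source, target)
```

Production CDC tools read the database transaction log instead of diffing snapshots, but the resulting change sets have this same shape.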
Posted 1 month ago
2.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
The ability to be a team player; the ability and skill to train other people in procedural and technical topics; strong communication and collaboration skills.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Able to write complex SQL queries.
- Experience in Azure Databricks.

Preferred technical and professional experience:
- Excellent communication and stakeholder management skills.
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Mumbai
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform.
- Develop streaming pipelines.
- Work with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5-8 years of experience in data management (DW, DL, data platform, lakehouse) and data engineering.
- Minimum 4 years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience in Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server.
- Good to excellent SQL skills.

Preferred technical and professional experience:
- Certification in Azure and Databricks, or Cloudera Spark certified developer.
- Experience in Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server.
- Knowledge or experience of Snowflake is an added advantage.
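The streaming pipelines this role mentions often reduce to windowed aggregation, as in Spark Structured Streaming's tumbling windows. A self-contained sketch of a tumbling-window count in plain Python, so it runs without a Spark cluster; the event format is illustrative, and a real job would express the same grouping with PySpark's `groupBy(window(...))`:

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds):
    """Count events per (window_start, key) over fixed-size windows.

    `events` is an iterable of (epoch_seconds, key) pairs; each event
    falls into exactly one non-overlapping window.
    """
    counts = Counter()
    for ts, key in events:
        # Align the timestamp down to the start of its window.
        window_start = ts - (ts % window_seconds)
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (5, "click"), (12, "view"), (13, "click")]
per_window = tumbling_window_counts(events, window_seconds=10)
```

A production version would also need watermarking to decide when a window can be emitted despite late-arriving events.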
Posted 1 month ago
3.0 - 7.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As an Associate Data Scientist at IBM, you will work to solve business problems using leading-edge and open-source tools such as Python, R, and TensorFlow, combined with IBM tools and our AI application suites. You will prepare, analyze, and understand data to deliver insight, predict emerging trends, and provide recommendations to stakeholders.

In your role, you may be responsible for:
- Implementing and validating predictive and prescriptive models, and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects.
- Writing programs to cleanse and integrate data in an efficient and reusable manner.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions.
- Evaluating modelling results and communicating them to technical and non-technical audiences.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Proof of concept (POC) development: develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions.
- Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations.
- Help showcase the ability of a Gen AI code assistant to refactor, rewrite, and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs.
- Document solution architectures, design decisions, implementation details, and lessons learned.
- Create technical documentation, white papers, and best practice guides.

Preferred technical and professional experience:
- Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face.
- Understanding of libraries such as scikit-learn, pandas, Matplotlib, etc.
- Familiarity with cloud platforms.
- Experience and working knowledge in COBOL and Java is preferred.
- Experience in Python and PySpark is an added advantage.
Posted 1 month ago
2.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Good hands-on experience in DBT is required; ETL DataStage and Snowflake are preferred.
- Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer.
- Ability to use extract, transform, and load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed.
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.
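ETL work of the DBT and DataStage variety listed above is usually incremental: each run processes only rows newer than the high-water mark already loaded, much as a dbt incremental model filters on `updated_at > (select max(updated_at) from {{ this }})`. A minimal Python sketch of that watermark pattern; the (timestamp, payload) row format is an assumption:

```python
def incremental_load(source_rows, loaded_rows):
    """Return only the source rows newer than the latest loaded timestamp.

    Rows are (timestamp, payload) tuples; an empty target means a
    full initial load.
    """
    watermark = max((ts for ts, _ in loaded_rows), default=float("-inf"))
    return [row for row in source_rows if row[0] > watermark]

already_loaded = [(1, "a"), (2, "b")]
incoming = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
new_rows = incremental_load(incoming, already_loaded)
```

Note this pattern only captures appends and timestamped updates; deletions need a CDC-style mechanism instead.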
Posted 1 month ago
2.0 - 5.0 years
4 - 8 Lacs
Kolkata
Work from Office
The ability to be a team player; the ability and skill to train other people in procedural and technical topics; strong communication and collaboration skills.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Able to write complex SQL queries.
- Experience in Azure Databricks.

Preferred technical and professional experience:
- Excellent communication and stakeholder management skills.
Posted 1 month ago
2.0 - 5.0 years
13 - 17 Lacs
Bengaluru
Work from Office
As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Develop/convert databases (Hadoop to GCP) and their objects (tables, views, procedures, functions, triggers, etc.) from one database platform to another.
- Implement a specific data replication mechanism (CDC, file data transfer, bulk data transfer, etc.).
- Expose data as APIs.
- Participate in the modernization roadmap journey.
- Analyze discovery and analysis outcomes; lead discovery and analysis workshops/playbacks.
- Identify application dependencies and source/target database incompatibilities.
- Analyze non-functional requirements (security, HA, RTO/RPO, storage, compute, network, performance benchmarks, etc.).
- Prepare effort estimates, WBS, staffing plan, RACI, RAID, etc.
- Lead the team in adopting the right tools for the various migration and modernization methods.

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.
Posted 1 month ago
4.0 - 9.0 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform.
- Develop streaming pipelines.
- Work with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4 years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS.
- Experience in AWS EMR, AWS Glue, or Databricks; AWS Redshift; DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers such as Kafka.

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developer.
Posted 1 month ago
2.0 - 5.0 years
13 - 17 Lacs
Chennai
Work from Office
As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Develop/convert databases (Hadoop to GCP) and their objects (tables, views, procedures, functions, triggers, etc.) from one database platform to another.
- Implement a specific data replication mechanism (CDC, file data transfer, bulk data transfer, etc.).
- Expose data as APIs.
- Participate in the modernization roadmap journey.
- Analyze discovery and analysis outcomes; lead discovery and analysis workshops/playbacks.
- Identify application dependencies and source/target database incompatibilities.
- Analyze non-functional requirements (security, HA, RTO/RPO, storage, compute, network, performance benchmarks, etc.).
- Prepare effort estimates, WBS, staffing plan, RACI, RAID, etc.
- Lead the team in adopting the right tools for the various migration and modernization methods.

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.
Posted 1 month ago
3.0 - 6.0 years
10 - 14 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience in the integration efforts between Alation and Manta, ensuring seamless data flow and compatibility.
- Collaborate with cross-functional teams to gather requirements and design solutions that leverage both the Alation and Manta platforms effectively.
- Develop and maintain data governance processes and standards within Alation, leveraging Manta's data lineage capabilities.
- Analyze data lineage and metadata to provide insights into data quality, compliance, and usage patterns.

Preferred technical and professional experience:
- Lead the evaluation and implementation of new features and updates for both the Alation and Manta platforms, ensuring alignment with organizational goals and objectives.
- Drive continuous improvement initiatives to enhance the efficiency and effectiveness of data management processes, leveraging Alation.
Posted 1 month ago
5.0 - 10.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles. Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation. Contribute to reusable components, assets, and accelerators to support capability development. Participate in customer presentations as a platform architect and subject matter expert on big data, Azure cloud, and related technologies. Participate in customer PoCs to deliver the outcomes. Participate in delivery and product reviews and quality assurance, and act as a design authority.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience designing data products that provide descriptive, prescriptive, and predictive analytics to end users or other systems.
- Experience in data engineering and architecting data platforms; experience architecting and implementing data platforms on the Azure cloud platform.
- Experience on Azure cloud is mandatory: ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hubs, Snowflake, Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow.
- Experience in the big data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience:
- Experience architecting complex data platforms on the Azure cloud platform and on-prem.
- Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric.
- Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.
Posted 1 month ago
2.0 - 5.0 years
8 - 12 Lacs
Navi Mumbai
Work from Office
Experience in creating templates in OpenText Exstream CE 23.4. Responsible for designing and developing different documents and business forms using OpenText Exstream. Understanding of different input, output, and print file formats (PDF, etc.). Perform unit testing of templates/documents. Apply styles and images to document designs. Use output comparison tools to compare different outputs. Should have experience working with the Exstream Design Manager and Exstream Designer tools, and prior knowledge of working with the Exstream web service. Design templates, objects, rules, and variables, and create documents based on templates. Understand current SmartCOMM templates and create templates based on them.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- An intuitive individual with an ability to manage change and proven time management.
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
- Up-to-date technical knowledge gained by attending educational workshops and reviewing publications.

Preferred technical and professional experience:
- Experience in creating templates in OpenText Exstream CE 23.4, and in designing and developing documents and business forms using OpenText Exstream.
- Understanding of different input, output, and print file formats (PDF, etc.).
- Experience performing unit testing of templates/documents.
Posted 1 month ago
5.0 - 7.0 years
7 - 9 Lacs
Gurugram
Work from Office
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.

In your role, you will be responsible for:
- Working with multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc.
- Applying Python and SQL work experience; being proactive, collaborative, and able to respond to critical situations.
- Analyzing data for functional business requirements and working directly with the customer.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform.
- Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer.
- An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work.
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
- End-to-end functional knowledge of the data pipeline/transformation implementations you have delivered, including the purpose and KPIs the transformations served.

Preferred technical and professional experience:
- Experience with AEM core technologies: OSGi services, Apache Sling, the Granite framework, the Java Content Repository API, Java 8+, and localization.
- Familiarity with build tools such as Jenkins and Maven; knowledge of version control tools, especially Git.
- Knowledge of patterns and good practices for designing and developing quality, clean code; knowledge of HTML, CSS, JavaScript, and jQuery.
- Familiarity with task management, bug tracking, and collaboration tools like JIRA and Confluence.
Posted 1 month ago
2.0 - 5.0 years
6 - 10 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Expertise in data warehousing, information management, data integration, and business intelligence using the ETL tool Informatica PowerCenter.
- Knowledge of cloud, Power BI, and data migration to the cloud.
- Experience in Unix shell scripting and Python.
- Experience with relational SQL, big data, etc.

Preferred technical and professional experience:
- Knowledge of the MS Azure cloud.
- Experience in Informatica PowerCenter.
- Experience in Unix shell scripting and Python.
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Pune
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform Experience in developing streaming pipelines Experience working with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark and Kafka Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total 5-8 years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala; Minimum 3 years of experience on Cloud Data Platforms on Azure; Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB Good to excellent SQL skills Preferred technical and professional experience Certification in Azure and Databricks, or Cloudera Spark Certified Developer Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB Knowledge or experience of Snowflake will be an added advantage
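The ingest/process/transform flow described above can be illustrated with a minimal sketch. The schema, field names, and business rule below are hypothetical examples, not taken from the posting, and plain Python stands in for the Spark DataFrame operations so the snippet runs without a cluster:

```python
# Minimal sketch of one batch transform stage: filter, enrich, aggregate.
# The row schema (region/qty/price) and rules are illustrative assumptions.
from collections import defaultdict

def transform(records):
    """Keep valid rows, derive a revenue field, and aggregate by region."""
    totals = defaultdict(float)
    for row in records:
        if row.get("qty", 0) <= 0:           # filter: drop invalid rows
            continue
        revenue = row["qty"] * row["price"]  # enrich: derived column
        totals[row["region"]] += revenue     # aggregate: group-by sum
    return dict(totals)

sales = [
    {"region": "south", "qty": 2, "price": 10.0},
    {"region": "north", "qty": 0, "price": 5.0},  # filtered out
    {"region": "south", "qty": 1, "price": 4.0},
]
print(transform(sales))
```

In PySpark the same stage would typically be expressed as a chain of `filter`, `withColumn`, and `groupBy(...).sum()` calls over a DataFrame.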
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive Good to have AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Developed Python code to gather data from HBase and designed solutions implemented using PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations Preferred technical and professional experience Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
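The "custom framework for generating rules" mentioned above is, in essence, a small rules engine. As a hedged sketch (the rule names, predicates, and row shape are all assumptions, not the actual framework), the core pattern is a mapping of named predicates applied to each row:

```python
# Sketch of a tiny rules engine for row-level data-quality checks.
# Rule names and predicates are illustrative, not from the posting.
RULES = {
    "non_null_id": lambda row: row.get("id") is not None,
    "positive_amount": lambda row: row.get("amount", 0) > 0,
}

def apply_rules(rows, rules=RULES):
    """Return (passed_rows, failures) where failures maps row index -> failed rule names."""
    passed, failures = [], {}
    for i, row in enumerate(rows):
        failed = [name for name, check in rules.items() if not check(row)]
        if failed:
            failures[i] = failed
        else:
            passed.append(row)
    return passed, failures

rows = [
    {"id": 1, "amount": 9.5},
    {"id": None, "amount": 3.0},
    {"id": 2, "amount": -1},
]
ok, bad = apply_rules(rows)
print(len(ok), bad)
```

In a Spark setting, each rule would typically become a DataFrame filter or a derived validation column, with failing rows routed to a quarantine table.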
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Mumbai
Work from Office
Role Overview: Lead the architectural design and implementation of a secure, scalable Cloudera-based Data Lakehouse for one of India’s top public sector banks. Key Responsibilities: * Design end-to-end Lakehouse architecture on Cloudera * Define data ingestion, processing, storage, and consumption layers * Guide data modeling, governance, lineage, and security best practices * Define migration roadmap from existing DWH to CDP * Lead reviews with client stakeholders and engineering teams Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Proven experience with Cloudera CDP, Spark, Hive, HDFS, Iceberg * Deep understanding of Lakehouse patterns and data mesh principles * Familiarity with data governance tools (e.g., Apache Atlas, Collibra) * Banking/FSI domain knowledge highly desirable.
Posted 1 month ago
8.0 - 12.0 years
35 - 45 Lacs
Pune
Work from Office
**Role Overview**
We are seeking an experienced Big Data Cloud Architect with 8–12 years of experience designing and implementing scalable big data solutions on Azure and AWS. The ideal candidate will have deep hands-on expertise in cloud-based big data technologies, a strong background in the software development life cycle (SDLC), and experience in microservices and backend development.
---
**Key Responsibilities**
- Design, build, and maintain scalable big data architectures on Azure and AWS.
- Select and integrate big data tools and frameworks (e.g., Hadoop, Spark, Kafka, Azure Data Factory).
- Lead data migration from legacy systems to cloud-based solutions.
- Develop and optimize ETL pipelines and data processing workflows.
- Ensure data infrastructure meets performance, scalability, and security requirements.
- Collaborate with development teams to implement microservices and backend solutions for big data applications.
- Oversee the end-to-end SDLC for big data projects, from planning to deployment.
- Mentor junior engineers and contribute to architectural best practices.
- Prepare architecture documentation and technical reports.
---
**Required Skills & Qualifications**
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- 8–12 years of experience in big data and cloud architecture.
- Proven hands-on expertise with Azure and AWS big data services (e.g., Azure Synapse, AWS Redshift, S3, Glue, Data Factory).
- Strong programming skills in Python, Java, or Scala.
- Solid understanding of SDLC and agile methodologies.
- Experience in designing and deploying microservices, preferably for backend data systems.
- Knowledge of data storage, database management (relational and NoSQL), and data security best practices.
- Excellent problem-solving, communication, and team leadership skills.
---
**Preferred Qualifications**
- Certifications in AWS and/or Azure cloud platforms.
- Experience with Infrastructure as Code (e.g., Terraform, CloudFormation).
- Exposure to containerization (Docker, Kubernetes) and CI/CD pipelines.
- Familiarity with data analytics, machine learning, and event-driven architectures.
Posted 1 month ago
10.0 - 15.0 years
7 - 11 Lacs
Bengaluru
Work from Office
At Curriculum Associates (CA), we believe a diverse team leads to diversity in thinking, making our products better for teachers and students. If you read this job description, feel energized by what you see here, and believe you could bring passion and commitment to the role, but you aren't sure you meet every qualification, please apply! Above all, we are looking for the right person! Summary: Join a dynamic and innovative educational technology organization and play a pivotal role in developing impactful software solutions. We are seeking a Senior Software Engineer with robust experience in Scala, Spark, database systems, and Big Data technologies. This position emphasizes both individual technical contributions and collaborative efforts within an Agile environment to deliver scalable and efficient solutions that address complex business needs. Essential duties/responsibilities: Lead technical initiatives and contribute as a senior team member to achieve project goals and deadlines. Collaborate with team members to design, implement, and optimize software solutions aligned with organizational objectives. Build scalable, efficient, and high-performance pipelines and workflows for processing large amounts of batch and real-time data. Perform multidisciplinary work, supporting real-time streams, ETL pipelines, data warehouses, and reporting services. Recommend and advocate for technology upgrades to company leaders to ensure infrastructure remains robust and competitive. Design and develop microservices and data applications while ensuring seamless integration with other systems. Leverage Big Data technologies like Kafka, AWS S3, EMR, and Spark to handle data ingestion, transformation, and querying. Follow coding best practices, including unit testing, code reviews, code coverage, and maintaining comprehensive documentation. Conduct thorough code reviews to maintain quality, mentor junior team members, and promote continuous learning within the team.
Enhance system performance through analysis and capacity planning, ensuring efficient and reliable software releases. Actively bring new and innovative solutions to address challenging software issues that arise throughout the product lifecycle. Implement and promote security protocols and data governance standards across development projects. Actively engage in Agile processes to foster collaboration and innovation within the team. Required job skills: Strong software design capabilities with a deep understanding of design patterns and performance optimizations. Proficiency in writing high-quality, well-structured code in Java and Scala. Expertise in SQL and relational databases, with advanced skills in writing efficient, complex queries and optimizing database performance. Expertise in cloud computing infrastructure, particularly AWS (Aurora MySQL, DynamoDB, EMR, Lambda, etc.). Solid experience with Big Data tools such as Apache Spark and Kafka. Ability to clearly document and communicate technical solutions to diverse audiences. Experience mentoring and conducting constructive code reviews to support team development. Familiarity with Agile methodologies and modern development tools. Minimum qualifications: 10+ years of experience designing and developing enterprise-level software solutions 3+ years of experience developing Scala/Java applications and microservices using Spring Boot 7+ years of experience with large-volume data processing and big data tools such as Apache Spark, SQL, Scala, and Hadoop technologies 5+ years of experience with SQL and relational databases 2+ years of experience working with the Agile/Scrum methodology Preferred qualifications: Educational domain background
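A common building block of the real-time pipelines referenced above is a tumbling-window aggregation. The toy version below (the event shape and 60-second window are assumptions, not from the posting) shows the bucketing logic that a Kafka/Spark streaming job would apply at scale:

```python
# Illustrative tumbling-window count over a timestamped event stream,
# mimicking the windowed aggregations a streaming pipeline performs.
from collections import Counter

def windowed_counts(events, window_secs=60):
    """Bucket (timestamp, payload) events into fixed windows keyed by window start."""
    counts = Counter()
    for ts, _payload in events:
        window_start = (ts // window_secs) * window_secs  # floor to window boundary
        counts[window_start] += 1
    return dict(counts)

events = [(3, "a"), (59, "b"), (61, "c"), (130, "d")]
print(windowed_counts(events))
```

In Spark Structured Streaming, the equivalent is a `groupBy(window(...))` aggregation; the engine additionally handles late data and state management, which this sketch omits.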
Posted 1 month ago
7.0 - 12.0 years
6 - 10 Lacs
Mumbai
Work from Office
SLSQ126R458 As an Enterprise Account Executive at Databricks, you are a sales professional experienced in leading go-to-market campaigns in a few of the largest Indian conglomerates. You know how to sell innovation and change through customer vision expansion and can guide deals forward to compress decision cycles. You love understanding a product in depth and are passionate about communicating value to Customers and System Integrators. Databricks operates at the cutting edge of the Unified Data Analytics and AI space. Our customers turn to us to lead the accelerated innovation that their businesses need to gain a first-mover advantage in today's ultra-competitive landscape. As we continue our rapid expansion, we are looking for a creative, execution-oriented Enterprise Account Executive to join us and maximize the phenomenal market opportunity that exists for Databricks. Reporting to our Director of Enterprise Sales, you will manage a strategic enterprise vertical. Your informed perspective on Big Data, Advanced Analytics, and AI will help guide your successful execution strategy and allow you to provide genuine value to the client. The impact you will have: Present a territory plan within the first 90 days Meet with CIOs, IT executives, LOB executives, Program Managers, and other important partners Close both new accounts and existing accounts Identify and close quick, small wins while managing longer, complex sales cycles Exceed activity, pipeline, and revenue targets Track all customer details, including use case, purchase time frames, next steps, and forecasting in Salesforce Use a solution-based approach to selling and creating value for customers Promote the Databricks enterprise cloud data platform powered by Apache Spark Ensure 100% satisfaction among all customers Prioritize opportunities and apply appropriate resources Build a plan for success internally at Databricks and externally with your accounts.
What we look for: Previous field sales experience within big data, Cloud, SaaS, and a consumption selling motion Prior customer relationships with CIOs, program managers, and essential decision makers at local accounts The ability to simplify a technical capability into a value-based benefit 7+ years of Enterprise Sales experience exceeding quotas in larger accounts (preferably with Indian conglomerates like Reliance) Managing a small set of enterprise accounts rather than a broad territory Bachelor's Degree About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within the Employer's discretion whether to apply for a U.S. government license for such positions, and the Employer may decline to proceed with an applicant on this basis alone.
Posted 1 month ago
7.0 - 12.0 years
10 - 14 Lacs
Mumbai
Work from Office
SLSQ126R545 As an Enterprise Account Executive at Databricks, you are a sales professional experienced in leading go-to-market campaigns in one of the largest Banking institutions in India. You know how to sell innovation and change through customer vision expansion and can guide deals forward to compress decision cycles. You love understanding a product in depth and are passionate about communicating value to Customers and System Integrators. Databricks operates at the leading edge of the Unified Data Analytics and AI space. Our customers turn to us to lead the accelerated innovation that their businesses need to gain a first-mover advantage in today's ultra-competitive landscape. As we continue our rapid expansion, we are looking for a creative, execution-oriented Enterprise Account Executive to join the Retail & CPG team and maximize the phenomenal market opportunity that exists for Databricks. Reporting to our Director of Enterprise Sales, you will manage a strategic enterprise client in the BFSI vertical. Your informed perspective on Big Data, Advanced Analytics, and AI will help guide your successful execution strategy and allow you to provide genuine value to the client.
The impact you will have: Present a territory plan within the first 90 days Meet with CIOs, IT executives, LOB executives, Program Managers, and other important partners Close both new accounts and existing accounts Identify and close quick, small wins while managing longer, complex sales cycles Exceed activity, pipeline, and revenue targets Track all customer details, including use case, purchase time frames, next steps, and forecasting in Salesforce Use a solution-based approach to selling and creating value for customers Promote the Databricks enterprise cloud data platform powered by Apache Spark Ensure 100% satisfaction among all customers Prioritize opportunities and apply appropriate resources Build a plan for success internally at Databricks and externally with your accounts What we look for: Previous field sales experience within big data, Cloud, SaaS, and a consumption selling motion Prior customer relationships with CIOs, program managers, and essential decision makers at local accounts The ability to simplify a technical capability into a value-based benefit 7+ years of Enterprise Sales experience exceeding quotas in BFSI accounts like ICICI Bachelor's Degree About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion.
Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within the Employer's discretion whether to apply for a U.S. government license for such positions, and the Employer may decline to proceed with an applicant on this basis alone.
Posted 1 month ago
15.0 - 20.0 years
11 - 15 Lacs
Bengaluru
Work from Office
At Curriculum Associates (CA), we believe a diverse team leads to diversity in thinking, making our products better for teachers and students. If you read this job description, feel energized by what you see here, and believe you could bring passion and commitment to the role, but you aren't sure you meet every qualification, please apply! Above all, we are looking for the right person! Summary: Join our innovative educational technology organization as a Principal Software Engineer. Leverage your expertise in Scala, Spark, databases, and Big Data to architect and deliver scalable, impactful software solutions. In this role, you'll lead solution engineering efforts, drive new platform and product developments, analyze and enhance system architecture, and collaborate with product managers to plan and execute smooth feature rollouts within an Agile environment. Essential duties/responsibilities: Lead technical initiatives and guide the team to develop innovative software solutions that address complex challenges. Build scalable, efficient, and high-performance pipelines and workflows for processing large volumes of batch and real-time data. Maintain and enhance existing software systems to ensure performance and reliability. Recommend and implement technology upgrades to drive continuous improvement. Support real-time streams, ETL pipelines, data warehouses, and reporting services. Design and develop data frameworks, applications, and microservices that seamlessly integrate with other services. Utilize Big Data tools such as Kafka, AWS S3 Data Lake, EMR, and Spark to ingest, store, transform, and query data. Adhere to coding best practices, including unit testing, design/code reviews, and comprehensive documentation. Conduct thorough code reviews to maintain quality, mentor junior team members, and promote continuous learning. Perform performance analyses and capacity planning for each release.
Work effectively as part of an Agile team, contributing to process improvements and innovative solutions. Implement and promote security protocols and data governance standards across development projects. Proactively introduce new approaches to overcome software challenges throughout the product lifecycle. Required job skills: Strong software design skills with a deep understanding of design patterns and performance optimization. Expertise in writing high-quality, well-structured Scala code with an emphasis on functional programming and test-driven development. Ability to produce clear, concise, and organized documentation. Knowledge of Amazon cloud computing services (Aurora MySQL, DynamoDB, EMR, Lambda, Step Functions, and S3). Excellent communication skills and the ability to collaborate effectively with team members of varying technical backgrounds. Proficiency in conducting detailed code reviews focused on improving code quality and mentoring developers. Familiarity with software engineering and project management tools. Commitment to following security protocols and best practices in data governance. Capability to construct KPIs and use metrics for continuous process improvement. Minimum qualifications: 15+ years of experience designing and developing enterprise-level software solutions. 10+ years of experience with large-volume data processing and Big Data tools such as Apache Spark, Scala, and Hadoop. 5+ years of experience developing Scala/Java applications and microservices using Spring Boot. 5+ years of experience working with SQL and relational databases. 2+ years of experience working within Agile/Scrum environments. Preferred qualifications: Extended experience with Amazon cloud computing infrastructure. Background in the educational technology domain.
Summary: The Principal Software Engineer is responsible for bringing new digital products to market and enhancing existing ones by leveraging your expertise in software development with state-of-the-art industry frameworks, collaborating closely with product management and engineering teams. Essential duties/responsibilities: Deliver large software development initiatives, collaborating with product owners to deliver i-Ready experiences. Partner with architects and distinguished engineers to design robust software services and modules, ensuring a seamless customer experience at scale. Operate with a continuous improvement mindset, conducting code and design reviews, as well as driving adoption of state-of-the-art productivity tools. Model and champion efficiencies in software development and delivery practices. Stay abreast of industry best practices and drive adoption of applicable state-of-the-art frameworks and tools. Communicate clearly and consistently, utilizing verbal and written forms, influencing alignment on cross-team goals. Champion personal and peer development through mentoring. Engage in proactive learning to improve functional as well as technical knowledge. Required Job Skills and Abilities: Strong communication skills. Expert-level knowledge of multiple frameworks and languages. Extensive experience delivering cloud-hosted software services at scale. Experience working in an Agile environment, including experience with Scrum. Required Education and Experience: Bachelor's in Computer Science preferred 10+ years of experience in web application development. Travel: If remote, less than twice per quarter to Company offices only. Working Environment: N/A
Posted 1 month ago