Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5.0 - 8.0 years
7 - 10 Lacs
Bengaluru
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for the various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Total 5-8 years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on Azure. Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB. Good to excellent SQL skills.
Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark certified developers. Knowledge or experience of Snowflake will be an added advantage.
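The role above centres on Spark-based ingestion; as a minimal illustration, here is a hedged PySpark sketch of such a batch pipeline, reading a file source, standardising types, and appending to a partitioned Hive table. The storage path, column names, and table names are assumptions for illustration, not details from the posting.

from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("batch-ingest")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical ADLS landing path; a header row is assumed.
raw = (spark.read
       .option("header", True)
       .csv("abfss://landing@myaccount.dfs.core.windows.net/orders/"))

# Standardise types, derive a partition column, drop duplicate keys.
cleaned = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("order_date", F.to_date("order_ts"))
           .dropDuplicates(["order_id"]))

# Append into a partitioned Hive table on the platform.
(cleaned.write
 .mode("append")
 .partitionBy("order_date")
 .saveAsTable("curated.orders"))

A streaming variant of the same pipeline would swap spark.read for spark.readStream against a Kafka source while keeping the transformations unchanged.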
Posted 1 day ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include: Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing various enterprise search applications, such as Elasticsearch and Splunk, for client requirements. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: We are seeking a skilled Azure Data Engineer with 5+ years of experience, including 3+ years of hands-on experience with ADF/Databricks. The ideal candidate has Databricks, Data Lake, and Python programming skills, along with experience deploying to Databricks and familiarity with Azure Data Factory.
Preferred technical and professional experience: Good communication skills. 3+ years of experience with ADF/DB/Data Lake. Ability to communicate results to technical and non-technical audiences.
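Since the role mentions building enterprise search applications such as Elasticsearch, the following is a small hedged sketch of indexing and querying a document with the official Elasticsearch Python client (v8-style API); the endpoint, index name, and document fields are illustrative assumptions.

from elasticsearch import Elasticsearch

# Hypothetical cluster endpoint and index, for illustration only.
es = Elasticsearch("http://localhost:9200")

es.index(index="incidents", id="1", document={
    "title": "Payment gateway latency spike",
    "severity": "high",
    "created": "2025-06-16T10:00:00",
})

# Full-text match combined with a structured filter (v8 query DSL).
resp = es.search(index="incidents", query={
    "bool": {
        "must": {"match": {"title": "latency"}},
        "filter": {"term": {"severity": "high"}},
    }
})
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["title"])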
Posted 1 day ago
5.0 - 7.0 years
7 - 9 Lacs
Bengaluru
Work from Office
Work with the broader team to build, analyze, and improve the AI solutions. You will also work with our software developers in consuming different enterprise applications.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: The resource should have 5-7 years of experience, sound knowledge of Python, and should know how to use ML-related services. Proficient in Python with a focus on data analytics packages. Strategy: analyse large, complex data sets and provide actionable insights to inform business decisions. Strategy: design and implement data models that help in identifying patterns and trends. Collaboration: work with data engineers to optimize and maintain data pipelines. Perform quantitative analyses that translate data into actionable insights and support analytical, data-driven decision-making. Identify and recommend process improvements to enhance the efficiency of the data platform. Develop and maintain data models, algorithms, and statistical models.
Preferred technical and professional experience: Experience with conversation analytics. Experience with cloud technologies. Experience with data exploration tools such as Tableau.
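As an illustration of the analysis work described, here is a minimal pandas sketch that aggregates a large dataset into an actionable summary; the file, column names, and metric are assumptions, not part of the posting.

import pandas as pd

# Hypothetical interaction-level dataset; created_at is assumed
# to already be a datetime column in the parquet file.
df = pd.read_parquet("interactions.parquet")

# Monthly resolution rate by channel: the kind of aggregate
# that turns raw events into an actionable business insight.
summary = (df
           .assign(month=df["created_at"].dt.to_period("M"))
           .groupby(["month", "channel"])
           .agg(contacts=("case_id", "count"),
                resolution_rate=("resolved", "mean"))
           .reset_index())

print(summary.sort_values(["month", "resolution_rate"],
                          ascending=[True, False]))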
Posted 1 day ago
5.0 - 7.0 years
7 - 9 Lacs
Gurugram
Work from Office
Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and to interface directly with customers.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work.
Preferred technical and professional experience: Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge from attending educational workshops and reviewing publications.
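A hedged sketch of querying BigQuery from Python with the official google-cloud-bigquery client, the kind of day-to-day task this role describes; the project, dataset, and table names are illustrative assumptions.

from google.cloud import bigquery

# Assumes application-default credentials are configured;
# the project id is a hypothetical placeholder.
client = bigquery.Client(project="my-analytics-project")

sql = """
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM `my-analytics-project.sales.orders`  -- hypothetical table
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY region
    ORDER BY revenue DESC
"""

# Run the query and iterate the result rows.
for row in client.query(sql).result():
    print(row.region, row.orders, row.revenue)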
Posted 1 day ago
5.0 - 7.0 years
7 - 9 Lacs
Bengaluru
Work from Office
Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and to interface directly with customers.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work.
Preferred technical and professional experience: Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge from attending educational workshops and reviewing publications.
Posted 1 day ago
15.0 - 20.0 years
17 - 22 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: Databricks Unified Data Analytics Platform. Good to have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Mentor junior team members to enhance their skills and knowledge in data engineering. Continuously evaluate and improve data processes to enhance efficiency and effectiveness.
Professional & Technical Skills: Must-have skills: proficiency in Databricks Unified Data Analytics Platform. Experience with data integration and ETL tools. Strong understanding of data modeling and database design principles. Familiarity with cloud platforms and services related to data storage and processing. Knowledge of programming languages such as Python or Scala for data manipulation.
Additional Information: The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform. This position is based at our Bengaluru office. A 15 years full time education is required. Qualification: 15 years full time education.
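Since the role centres on Databricks ETL for migrating and deploying data across systems, here is a hedged sketch of an idempotent upsert using Delta Lake's MERGE, a common Databricks pattern; the table and key names are assumptions.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Hypothetical staged updates and curated target table.
updates = spark.read.table("staging.customer_updates")
target = DeltaTable.forName(spark, "curated.customers")

# MERGE on the business key: matched rows are updated, new rows inserted.
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

Because MERGE matches on the key, re-running the job after a failure does not duplicate rows, which is what makes the load safe to schedule.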
Posted 1 day ago
2.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Requirements: Minimum of 2-3 years of full-stack software development experience in building large-scale mission-critical applications. Strong foundation in computer science, with strong competencies in data structures, algorithms, and software design optimized for building highly distributed and parallelized systems. Proficiency in one or more programming languages - Java and Python. Strong hands-on experience in MEAN, MERN, Core Java, J2EE technologies, microservices, Spring, Hibernate, SQL, REST APIs. Experience in web development using one of the technologies like Angular or React. Experience with one or more of the following database technologies: SQL Server, Postgres, MySQL, and NoSQL such as HBase, MongoDB, and DynamoDB. Strong problem-solving skills to deep dive, brainstorm, and choose the best solution approach. Experience with AWS services like EKS, ECS, S3, EC2, RDS, Redshift, and GitHub/Stash, CI/CD pipelines, Maven, Jenkins, security tools, Kubernetes/VMs/Linux, monitoring, alerting, etc. Experience in Agile development is a big plus. Excellent presentation, collaboration, and communication skills required. Result-oriented and experienced in leading broad initiatives and teams. Knowledge of Big Data technologies like Hadoop, Hive, Spark, Kafka, etc. would be an added advantage. Bachelor's or Master's degree in Mathematics or Computer Science. 1-4 years of experience as a full-stack engineer. Proven analytic skills and experience designing scalable applications. This job was posted by Vivek Chhikara from Protium.
Posted 1 day ago
3.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
About Healthify: Healthify is a leading fitness and nutrition platform revolutionizing the way people manage their health. With cutting-edge AI tools and a vast database of Indian foods, we provide personalized nutrition and fitness solutions to help users achieve their health goals. We are looking for a dynamic Senior Business Analyst to drive the next phase of our growth and innovation.
Role Overview: As a Business Analyst, you will collaborate with business and product managers to leverage advanced analytics tools and techniques, driving impactful insights and strategies. Your responsibilities include conducting high-level data analyses, developing analytical frameworks, and creating detailed reports. You will independently manage end-to-end analysis, build machine learning models, and align analytics with product goals. Additionally, you will oversee hiring, leading, and mentoring junior analysts.
Key Responsibilities: You will work closely with business managers and product managers to create an enormous impact using the latest analytics tools and techniques. Work on high-level projects and deep dives to draw insights from the data and build analytical frameworks and presentations. Be able to work on end-to-end analysis independently. Build analytics reporting, making sure tracking is done at the right frequency for the KPIs. Build ML models to mature a problem from the analytics state to data science. Co-own the product goals and drive analytics in the right direction to achieve them. You will be expected to be involved in hiring, leading, and mentoring junior analysts in the team.
Qualifications: Preferably 3+ years of analytical experience. Tier 1 engineering college graduate / MBA. Extremely strong analytical orientation and product understanding. Demonstrated experience in descriptive and predictive analytics, and expertise in SQL and Python/R. Experience with reporting and business intelligence tools like Tableau, QlikView, Looker, etc. Experience with CleverTap, Mixpanel, Amplitude, etc. Exposure to distributed analytics processing technologies is desired (e.g. Hive, Spark). Ability to understand data architecture and data engineering pipelines is good to have.
Posted 1 day ago
5.0 - 8.0 years
1 Lacs
Hyderābād
On-site
Assistant / Deputy Manager Hyderabad B.E./MCA/B.Tech/M.Sc (I.T.) 25-35
Experience & Role: Experience - 5-8 years of relevant experience. Role - We are looking for a technically strong and detail-oriented professional to manage and support our Cloudera Data Platform (CDP) ecosystem. The ideal candidate should possess in-depth expertise in distributed data processing frameworks and hands-on experience with core Hadoop components. This role requires both operational excellence and technical depth, with an emphasis on optimizing data processing pipelines and maintaining high system availability.
Job Description: - Administer and maintain the Cloudera Data Platform (CDP) across all environments (dev/test/prod) - Strong expertise in the Big Data ecosystem: Spark, Hive, Sqoop, HDFS, MapReduce, Oozie, YARN, HBase, NiFi. - Develop and optimize complex Hive queries, including the use of analytical functions for reporting and data transformation. - Create custom UDFs in Hive to handle specific business logic and integration needs. - Ensure efficient data ingestion and movement using Sqoop, NiFi, and Oozie workflows. - Work with various data formats (CSV, TSV, Parquet, ORC, JSON, AVRO) and compression techniques (Gzip, Snappy) to maximize performance and storage. - Monitor and tune performance of YARN and Spark applications for optimal resource utilization. - In-depth knowledge of the architecture of distributed systems and parallel computing. - Good knowledge of Oracle PL/SQL and shell scripting. - Strong problem-solving and analytical thinking. - Effective communication and documentation skills. - Ability to collaborate across multi-disciplinary teams. - Self-driven with the ability to manage multiple priorities under tight timelines.
Job Types: Full-time, Permanent Pay: Up to ₹100,000.00 per year Schedule: Day shift, Monday to Friday Work Location: In person
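As an illustration of the Hive analytical-function and custom-UDF work listed above, here is a hedged PySpark-on-Hive sketch (database, table, and column names are assumptions); a native Hive deployment would typically register a Java UDF rather than the Python one shown.

from pyspark.sql import SparkSession
from pyspark.sql.types import StringType

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Custom business logic exposed as a UDF (illustrative threshold).
def risk_band(amount):
    return "HIGH" if amount and amount > 100000 else "NORMAL"

spark.udf.register("risk_band", risk_band, StringType())

# Analytical (window) functions plus the custom UDF in HiveQL.
spark.sql("""
    SELECT customer_id,
           txn_ts,
           amount,
           risk_band(amount) AS band,
           SUM(amount) OVER (PARTITION BY customer_id
                             ORDER BY txn_ts) AS running_total,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY txn_ts DESC) AS recency_rank
    FROM finance.transactions
""").show()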
Posted 1 day ago
7.0 years
0 Lacs
Hyderābād
On-site
Digital Solutions Consultant I - HYD015Q
Company: Worley Primary Location: IND-AP-Hyderabad Job: Digital Solutions Schedule: Full-time Employment Type: Agency Contractor Job Level: Experienced Job Posting: Jun 16, 2025 Unposting Date: Jul 16, 2025 Reporting Manager Title: Senior General Manager
We deliver the world's most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role. Building on our past. Ready for the future. Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we're bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals, and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.
The Role: As a Digital Solutions Consultant with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience. We are looking for a skilled Data Engineer to join our Digital Customer Solutions team. The ideal candidate should have experience in cloud computing and big data technologies. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data solutions that can handle large volumes of data. You will work closely with stakeholders to ensure that the data is accurate, reliable, and easily accessible.
Responsibilities: Design, build, and maintain scalable data pipelines that can handle large volumes of data. Document the design of proposed solutions, including structuring data (data modelling applying different techniques, including 3NF and dimensional modelling) and optimising data for further consumption (working closely with Data Visualization Engineers, Front-end Developers, Data Scientists and ML Engineers). Develop and maintain ETL processes to extract data from various sources (including sensor, semi-structured and unstructured data, as well as structured data stored in traditional databases, file stores, or from SOAP and REST data interfaces). Develop data integration patterns for batch and streaming processes, including implementation of incremental loads. Build quick prototypes and proofs of concept to validate assumptions and prove the value of proposed solutions or new cloud-based services. Define data engineering standards and develop data ingestion/integration frameworks. Participate in code reviews and ensure all solutions are aligned to architectural and requirement specifications. Develop and maintain cloud-based infrastructure to support data processing using Azure Data Services (ADF, ADLS, Synapse, Azure SQL DB, Cosmos DB). Develop and maintain automated data quality pipelines. Collaborate with cross-functional teams to identify opportunities for process improvement. Manage a team of Data Engineers.
About You: To be considered for this role it is envisaged you will possess the following attributes: Bachelor's degree in Computer Science or a related field. 7+ years of experience in big data technologies such as Hadoop, Spark, Hive & Delta Lake. 7+ years of experience in cloud computing platforms such as Azure, AWS or GCP. Experience working in cloud data platforms, including a deep understanding of scaled data solutions. Experience working with different data integration patterns (batch and streaming) and implementing incremental data loads. Proficient in scripting in Java, Windows and PowerShell. Proficient in at least one programming language like Python or Scala. Expert in SQL. Proficient in working with data services like ADLS, Azure SQL DB, Azure Synapse, Snowflake, NoSQL (e.g. Cosmos DB, Mongo DB), Azure Data Factory, Databricks or similar on AWS/GCP. Experience in using ETL tools (like Informatica IICS Data Integration) is an advantage. Strong understanding of Data Quality principles and experience in implementing them.
Moving forward together: We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We're building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there's a path for you here. And there's no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice Here. Please note: If you are being represented by a recruitment agency you will not be considered; to be considered you will need to apply directly to Worley.
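A hedged sketch of the incremental-load pattern the responsibilities mention: a high-watermark read that pulls only rows changed since the last load. The JDBC URL, tables, and watermark column are illustrative assumptions, and first-run handling (a null watermark) is omitted for brevity.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# 1. Read the current high-watermark from the target table.
last_wm = (spark.read.table("silver.orders")
           .agg(F.max("updated_at"))
           .first()[0])

# 2. Pull only rows changed since the watermark from the source DB.
incremental = (spark.read.format("jdbc")
               .option("url", "jdbc:sqlserver://src-host;database=erp")  # hypothetical
               .option("dbtable",
                       f"(SELECT * FROM dbo.orders WHERE updated_at > '{last_wm}') q")
               .load())

# 3. Append the delta; a MERGE would be used instead if changed
#    rows must overwrite existing ones.
incremental.write.mode("append").saveAsTable("silver.orders")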
Posted 1 day ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We deliver the world's most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role. Building on our past. Ready for the future. Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we're bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals, and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.
The Role: As a Digital Solutions Consultant with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience. We are looking for a skilled Data Engineer to join our Digital Customer Solutions team. The ideal candidate should have experience in cloud computing and big data technologies. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data solutions that can handle large volumes of data. You will work closely with stakeholders to ensure that the data is accurate, reliable, and easily accessible.
Responsibilities: Design, build, and maintain scalable data pipelines that can handle large volumes of data. Document the design of proposed solutions, including structuring data (data modelling applying different techniques, including 3NF and dimensional modelling) and optimising data for further consumption (working closely with Data Visualization Engineers, Front-end Developers, Data Scientists and ML Engineers). Develop and maintain ETL processes to extract data from various sources (including sensor, semi-structured and unstructured data, as well as structured data stored in traditional databases, file stores, or from SOAP and REST data interfaces). Develop data integration patterns for batch and streaming processes, including implementation of incremental loads. Build quick prototypes and proofs of concept to validate assumptions and prove the value of proposed solutions or new cloud-based services. Define data engineering standards and develop data ingestion/integration frameworks. Participate in code reviews and ensure all solutions are aligned to architectural and requirement specifications. Develop and maintain cloud-based infrastructure to support data processing using Azure Data Services (ADF, ADLS, Synapse, Azure SQL DB, Cosmos DB). Develop and maintain automated data quality pipelines. Collaborate with cross-functional teams to identify opportunities for process improvement. Manage a team of Data Engineers.
About You: To be considered for this role it is envisaged you will possess the following attributes: Bachelor's degree in Computer Science or a related field. 7+ years of experience in big data technologies such as Hadoop, Spark, Hive & Delta Lake. 7+ years of experience in cloud computing platforms such as Azure, AWS or GCP. Experience working in cloud data platforms, including a deep understanding of scaled data solutions. Experience working with different data integration patterns (batch and streaming) and implementing incremental data loads. Proficient in scripting in Java, Windows and PowerShell. Proficient in at least one programming language like Python or Scala. Expert in SQL. Proficient in working with data services like ADLS, Azure SQL DB, Azure Synapse, Snowflake, NoSQL (e.g. Cosmos DB, Mongo DB), Azure Data Factory, Databricks or similar on AWS/GCP. Experience in using ETL tools (like Informatica IICS Data Integration) is an advantage. Strong understanding of Data Quality principles and experience in implementing them.
Moving forward together: We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We're building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there's a path for you here. And there's no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice Here. Please note: If you are being represented by a recruitment agency you will not be considered; to be considered you will need to apply directly to Worley.
Company: Worley Primary Location: IND-AP-Hyderabad Job: Digital Solutions Schedule: Full-time Employment Type: Agency Contractor Job Level: Experienced Job Posting: Jun 16, 2025 Unposting Date: Jul 16, 2025 Reporting Manager Title: Senior General Manager
Posted 1 day ago
175.0 years
0 Lacs
Bengaluru South, Karnataka, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.
The Financial Data Sourcing Analyst is a critical role in the Financial Reporting Quality Assurance (FRQA) organization within Corporate Controllership, in support of the Regulatory Reporting Automation program. This role is responsible for driving the definition, gathering, exploration, and analysis of finance data and its data sources to deliver end-to-end automation for our regulatory reporting platforms. The Data Sourcing Architect team oversees the data mapping, profiling, and source-to-target (S2T) analysis mapping for new data requirements of our regulatory reporting automation and systems. It also leads the coordination and orchestration of the close partnership between Product Owners, Report/Business Owners, Technology, and Data Testing teams to determine that data and product features of the data solution are put into production with the highest degree of confidence in the data flow and data system requirements, or that data sets of financial data systems are deactivated or decommissioned. The Data Sourcing Analyst must be a highly analytical, well-organized, data-driven individual with strong time management and a high degree of technical skill, confident in presenting highly complex data concepts and technicalities in simple, pragmatic terms to the team and associated stakeholders.
How will you make an impact in this role? Key Responsibilities: Collaborate with business stakeholders to understand the data needs for regulatory reporting; translate the business requirements into technical specifications for data solutions. Develop and implement data management strategies for the Regulatory Reporting Data Domain (RRDD); design and maintain RRDD data models to support regulatory reporting and ensure they are scalable and flexible. Partner with business, upstream, and Technology teams to implement data solutions for regulatory reporting; monitor and optimize data systems for performance and efficiency. Collaborate with the data governance team to define standards and ensure data quality and consistency in RRDD. Perform data sourcing gap analysis and profiling of attributes, and complete source-to-target mapping documents for regulatory reports automation. Conduct data analysis on existing processes and datasets to understand and support Point of Arrival (POA) process designs, including migration of RRDD tables from on-prem to BigQuery. Experience determining portfolios, data elements, and grain of data required for designing processes to review data scenarios, providing clarification on how to report on these scenarios in alignment with regulatory guidance. Support development of executable data querying algorithms using tools such as SQL that enable validation of data conformity and expected data system functionality, including replication of deterministic logic and filtering criteria for master and reference data to be used across operational and regulatory reporting processes. Identification of business requirements and development of functional requirement documentation for new data sources and attributes, including design, prototyping, testing, and implementation of report owner and regulatory requirements. Document and understand core components of solution architecture, including data patterns, data-related capabilities, and standardization and conformance of disparate datasets.
Minimum Qualifications: 3+ years of work experience in data sourcing and analysis. 3+ years of work experience in banking / regulatory / financial / technology services. Product management, data migration, and data analytics working experience is a plus. Experienced in Agile delivery concepts or other project management methodologies. Experience in data analytics, data profiling, source-to-target (S2T) data mapping, and analyzing System of Record (SOR) data and its Data Quality (DQ) rules to identify data issues in SORs. Strong SQL / NoSQL and data analysis experience; able to write and understand complex SQL and HiveQL, with hands-on Oracle SQL and Hive experience. Experience with MS Excel, Power Query, and other analytical tools, e.g., Tableau. Experience with Python, Google BigQuery, PL/SQL. Critical thinking and complex problem-solving skills (data application). Excellent written and verbal communications with the ability to communicate highly complex concepts and processes in simple terms and pragmatically. A self-starter and proactive team player with excellent relationship-building and collaboration skills, facilitating a network of strong relationships across the organization.
Preferred Qualifications: Knowledge of US regulatory reports (Y9C, Y14, Y15, 2052a, amongst others), and a general understanding of banking products. Working exposure in data analysis of financial data domains to support regulatory and analytical requirements for large-scale banking/financial organizations. Experience with Google Cloud capabilities.
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities.
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
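As a hedged illustration of the source-to-target validation work described, here is a sketch that runs reconciliation queries over any PEP 249 database connection; the table names are assumptions, and get_connection() is a hypothetical connection factory, not a real API.

# Illustrative S2T reconciliation checks; schema and table names
# (sor.gl_balances, rrdd.gl_balances) are hypothetical.
CHECKS = {
    "row_count_match": """
        SELECT (SELECT COUNT(*) FROM sor.gl_balances)  AS source_rows,
               (SELECT COUNT(*) FROM rrdd.gl_balances) AS target_rows
    """,
    "no_null_keys": """
        SELECT COUNT(*) AS violations
        FROM rrdd.gl_balances
        WHERE account_id IS NULL
    """,
}

def run_checks(conn):
    # conn is any PEP 249 (DB-API) connection to the warehouse.
    cur = conn.cursor()
    for name, sql in CHECKS.items():
        cur.execute(sql)
        print(name, cur.fetchone())

# run_checks(get_connection())  # wire up your warehouse connection here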
Posted 1 day ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities: As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for the various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.
Preferred Education: Master's Degree.
Required Technical And Professional Expertise: Total 6-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on Azure. Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB. Good to excellent SQL skills.
Preferred Technical And Professional Experience: Certification in Azure and Databricks, or Cloudera Spark certified developers.
Posted 1 day ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities: As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for the various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.
Preferred Education: Master's Degree.
Required Technical And Professional Expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on Azure. Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka.
Preferred Technical And Professional Experience: Certification in Azure and Databricks, or Cloudera Spark certified developers.
Posted 1 day ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Requirements Description and Requirements
Position Summary: A highly skilled Big Data (Hadoop) Administrator responsible for the installation, configuration, engineering, and architecture of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, scripting, and infrastructure-as-code for automating and optimizing operations is highly desirable. Experience in collaborating with cross-functional teams, including application development, infrastructure, and operations, is highly preferred.
Job Responsibilities: Manages the design, distribution, performance, replication, security, availability, and access requirements for large and complex Big Data clusters. Designs and develops the architecture and configurations to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, cluster changes, and compliance. Identifies and resolves issues utilizing structured tools and techniques. Provides technical assistance and mentoring to staff in all aspects of Hadoop cluster management; consults and advises application development teams on security, query optimization, and performance. Writes scripts to automate routine cluster management tasks and documents maintenance processing flows per standards. Implements industry best practices while performing Hadoop cluster administration tasks. Works in an Agile model with a strong understanding of Agile concepts. Collaborates with development teams to provide and implement new features. Debugs production issues by analyzing logs directly and using tools like Splunk and Elastic. Addresses organizational obstacles to enhance processes and workflows. Adopts and learns new technologies based on demand and supports team members by coaching and assisting.
Education: Bachelor's degree in Computer Science, Information Systems, or another related field with 14+ years of IT and Infrastructure engineering work experience.
Experience: 14+ years of total IT experience and 10+ years of relevant experience in Big Data databases.
Technical Skills: Big Data Platform Management: Expertise in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, Apache Spark, as well as JanusGraph and IBM BigSQL. Data Infrastructure & Security: Proficient in designing and implementing robust data infrastructure solutions with a strong focus on data security, utilizing tools like Apache Ranger and Kerberos. Performance Tuning & Optimization: Skilled in performance tuning and optimization of big data environments, leveraging advanced techniques to enhance system efficiency and reduce latency. Backup & Recovery: Experienced in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity. Linux & Troubleshooting: Strong knowledge of Linux operating systems, with proven ability to troubleshoot and resolve complex technical issues, collaborating effectively with cross-functional teams. DevOps & Scripting: Proficient in scripting and automation using tools like Ansible, enabling seamless integration and automation of cluster operations; experienced in infrastructure-as-code practices and observability tools such as Elastic. Agile & Collaboration: Strong understanding of Agile SAFe for Teams, with the ability to work effectively in Agile environments and collaborate with cross-functional teams. ITSM Process & Tools: Knowledgeable in ITSM processes and tools such as ServiceNow.
Other Critical Requirements: Automation and Scripting: Proficiency in automation tools and programming languages such as Ansible and Python to streamline operations and improve efficiency. Analytical and Problem-Solving Skills: Strong analytical and problem-solving abilities to address complex technical challenges in a dynamic enterprise environment. 24x7 Support: Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability. Team Management and Leadership: Proven experience managing geographically distributed and culturally diverse teams, with strong leadership, coaching, and mentoring skills. Communication Skills: Exceptional written and oral communication skills, with the ability to clearly articulate technical and functional issues, conclusions, and recommendations to stakeholders at all levels. Stakeholder Management: Prior experience in effectively managing both onshore and offshore stakeholders, ensuring alignment and collaboration across teams. Business Presentations: Skilled in creating and delivering impactful business presentations to communicate key insights and recommendations. Collaboration and Independence: Demonstrated ability to work independently as well as collaboratively within a team environment, ensuring successful project delivery in a complex enterprise setting.
About MetLife: Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
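As a hedged illustration of the routine-automation scripting this role involves, here is a small Python wrapper that parses "hdfs dfsadmin -report" output and flags under-replicated blocks; the alert threshold is an assumption.

import re
import subprocess
import sys

# Run the standard HDFS admin report (requires an HDFS client on PATH).
report = subprocess.run(
    ["hdfs", "dfsadmin", "-report"],
    capture_output=True, text=True, check=True,
).stdout

# Parse the under-replicated block count from the report text.
match = re.search(r"Under replicated blocks:\s*(\d+)", report)
under_replicated = int(match.group(1)) if match else 0

THRESHOLD = 0  # illustrative alerting threshold
if under_replicated > THRESHOLD:
    print(f"ALERT: {under_replicated} under-replicated blocks", file=sys.stderr)
    sys.exit(1)
print("HDFS replication healthy")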
Posted 1 day ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Responsibilities: We are seeking an experienced Data Scientist to lead the development of a Data Science program. You will work closely with various stakeholders to derive deep industry knowledge across the paper, water, leather, and performance chemical industries. You will help develop a data strategy for the company, including collection of the right data, creation of the data science project portfolio, partnering with external providers, and augmenting capabilities with additional internal hires. A large part of the job is communicating and developing relationships with key stakeholders and subject matter experts to tee up proofs of concept (PoC) projects that demonstrate how data science can solve old problems in unique and novel ways. You will not have a large internal team to rely on, at least initially, so individual expertise, breadth of data science knowledge, and ability to partner with external companies will be essential for success. In addition to the pure data science problems, you will be working closely with a multi-disciplinary team consisting of sensor scientists, software engineers, network engineers, mechanical/electrical engineers, and chemical engineers in the development and deployment of IoT solutions.
Basic Qualification: Bachelor's degree in a quantitative field such as Data Science, Statistics, Applied Mathematics, Physics, Engineering, or Computer Science. 5+ years of relevant working experience in an analytical role involving data extraction, analysis, and visualization, and expertise in the following areas: Expertise in one or more programming languages: R, Python, MATLAB, JMP, Minitab, Java, C++, Scala. Key libraries such as Sklearn, XGBoost, GLMNet, Dplyr, ggplot, RShiny. Experience and knowledge of data mining algorithms, including supervised and unsupervised machine learning techniques in areas such as gradient boosting, decision trees, multivariate regression, logistic regression, neural networks, random forests, SVM, naive Bayes, time series, and optimization. Microsoft IoT/data science toolkit: Azure Machine Learning, Data Lake, Data Lake Analytics, Workbench, IoT Hub, Stream Analytics, Cosmos DB, Time Series Insights, Power BI. Data querying languages: SQL, Hadoop/Hive. A demonstrated record of success with a verifiable portfolio of problems tackled.
Preferred Qualifications: Master's or PhD degree in a quantitative field such as Data Science, Statistics, Applied Mathematics, Physics, Engineering, or Computer Science. Experience in the specialty chemicals sector or a similar industry. Background in engineering, especially chemical engineering. Experience starting up a data science program. Experience working with global stakeholders. Experience working in a start-up environment, preferably in an IoT company. Knowledge of quantitative modeling tools and statistical analysis.
Personality Traits: A strong business focus, ownership, and inner self-drive to develop data science solutions for real-world customers with tangible impact. Ability to collaborate effectively with multi-disciplinary and passionate team members. Ability to communicate with a diverse set of stakeholders. Strong planning and organization skills, with the ability to manage multiple complex projects. A life-long learner who constantly updates skills.
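A minimal scikit-learn sketch of the supervised-learning work listed above (gradient boosting is one of the named techniques), trained on synthetic stand-in data since the posting specifies no dataset.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for, e.g., process-sensor features in a plant.
X, y = make_classification(n_samples=2000, n_features=12, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Gradient boosting, one of the techniques the posting names.
model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))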
Posted 1 day ago
0.6 - 1.6 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Visa is a world leader in payments and technology, with over 259 billion payments transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose – to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa. Job Description Work collaboratively with Data Analyst, Data Scientists Software Engineers and cross-functional partners to design and deploy data pipelines to deliver analytical solution. Responsible for building data pipelines, data model, data marts, data warehouse including OLAP cube in multidimensional data model with proficiency / conceptual understanding of PySpark and SQL scripting. Responsible for the design, development, testing, implementation and support functional semantic data marts using various modeling techniques from underlying data stores/data warehouse and facilitate Business Intelligence Data Solutions Experience in building reports, dashboards, scorecards & visualization using Tableau/ Power BI and other data analysis techniques to collect, explore, and extract insights from structured and unstructured data. Responsible for AI/ML model Utilizing machine learning, statistical methods, data mining, forecasting and predictive modeling techniques. Following Dev Ops Model, Agile implementation, CICD method of deployment & JIRA creation / management for projects. Define and build technical/data documentation and experience with code version control systems (for e.g., git). Assist owner with periodic evaluation of next generation & modernization of platform. Exhibit Leadership Principles such as Accountability & Ownership of High Standards: Given the criticality & sensitivity of data . Customer Focus : Going Above & Beyond in finding innovative solution and product to best serve the business needs and there-by Visa. This is a hybrid position. Expectation of days in office will be confirmed by your hiring manager. Qualifications Basic Qualifications • Bachelors degree or •0.6-1.6 years of work experience with a Bachelor’s Degree or Master's Degree in computer / information science with relevant work experience in IT industry •Enthusiastic, energetic and self-learning candidates with loads of curiosity and flexibility. •Proven hands-on capability in the development of data pipelines and data engineering. •Experience in creating data-driven business solutions and solving data problems using technologies such as Hadoop, Hive, and Spark. •Ability to program in one or more scripting languages such as Python and one or more programming languages such as Java or Scala. •Familiarity with AI-centric libraries like TensorFlow, PyTorch, and Keras. •Familiarity with machine learning algorithms and statistical models is beneficial. •Critical ability to interpret complex data and provide actionable insights. This encompasses statistical analysis, predictive modeling, and data visualization. •Extended experience in Agile Release Management practices, governance, and planning. •Strong leadership skills with demonstrated ability to lead global, cross-functional teams. Additional Information Visa is an EEO Employer. 
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
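As referenced above, here is a minimal PySpark sketch of the kind of ingest-transform-publish pipeline and data mart work this posting describes. The storage path, table names, and columns are hypothetical placeholders, not Visa's actual systems.

```python
# A minimal, hypothetical ingest-transform-publish pipeline in PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_daily_mart")
    .enableHiveSupport()  # lets saveAsTable register a Hive-compatible table
    .getOrCreate()
)

# Ingest: raw order events landed as JSON (hypothetical ADLS path).
raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/orders/")

# Transform: derive a date column and aggregate into a daily-by-region fact.
daily = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "region")
       .agg(
           F.count("*").alias("order_count"),
           F.sum("amount").alias("total_amount"),
       )
)

# Publish: write the aggregate as a partitioned table for BI tools to query.
(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.daily_orders_mart"))
```

Partitioning the published table by date is a common choice here, since downstream reports and dashboards usually filter by day.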
Posted 1 day ago
2.0 - 4.0 years
6 - 10 Lacs
Pune
Hybrid
So, what's the role all about?
We are looking for a highly driven and technically skilled Software Engineer to lead the integration of various Content Management Systems with AWS Knowledge Hub, enabling advanced Retrieval-Augmented Generation (RAG) search across heterogeneous customer data without requiring data duplication. This role will also be responsible for expanding the scope of Knowledge Hub to support non-traditional knowledge items and enhance customer self-service capabilities. You will work at the intersection of AI, search infrastructure, and developer experience to make enterprise knowledge instantly accessible, actionable, and AI-ready.
How will you make an impact?
Integrate CMS with AWS Knowledge Hub to allow seamless RAG-based search across diverse data types, eliminating the need to copy data into Knowledge Hub instances.
Extend Knowledge Hub capabilities to ingest and index non-knowledge assets, including structured data, documents, tickets, logs, and other enterprise sources.
Build secure, scalable connectors to read directly from customer-maintained indices and data repositories.
Enable self-service capabilities for customers to manage content sources using AppFlow or Tray.ai, configure ingestion rules, and set up search parameters independently.
Collaborate with the NLP/AI team to optimize relevance and performance for RAG search pipelines (a minimal retrieval sketch follows this posting).
Work closely with product and UX teams to design intuitive, powerful experiences around self-service data onboarding and search configuration.
Implement data governance, access control, and observability features to ensure enterprise readiness.
Have you got what it takes?
Proven experience with search infrastructure, RAG pipelines, and LLM-based applications.
2+ years' hands-on experience with AWS Knowledge Hub, AppFlow, Tray.ai, or equivalent cloud-based indexing/search platforms.
Strong backend development skills (Python, TypeScript/Node.js, .NET/Java) and familiarity with building and consuming REST APIs.
Infrastructure-as-Code (IaC) knowledge with services such as AWS CloudFormation and CDK.
Deep understanding of data ingestion pipelines, index management, and search query optimization.
Experience working with unstructured and semi-structured data in real-world enterprise settings.
Ability to design for scale, security, and multi-tenant environments.
What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!
Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Reporting into: Tech Manager, Engineering, CX
Role Type: Individual Contributor
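As noted in the impact list above, here is a hedged, self-contained sketch of the retrieval step in a RAG pipeline: embed the query, rank indexed passages by similarity, and assemble an LLM prompt. The embed function and in-memory index are toy stand-ins, only there to keep the sketch runnable; a production system would call an embedding model and a managed index such as the one this role targets.

```python
# Toy RAG retrieval: embed, rank by cosine similarity, build a prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedding: hashed bag-of-words, normalized to unit length.
    vec = np.zeros(64)
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Passages that in a real system would be read in place from a
# customer-maintained source, without copying.
passages = [
    "Refunds are processed within 5 business days.",
    "Password resets require a verified email address.",
    "Premium support is available 24/7 for enterprise plans.",
]
index = np.stack([embed(p) for p in passages])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)        # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]   # indices of the k best matches
    return [passages[i] for i in top]

query = "How long do refunds take?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The design point the posting emphasizes, querying customer indices in place rather than copying data into the search platform, corresponds here to the index being built over externally owned passages rather than ingested copies.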
Posted 1 day ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Are you a passionate Spark and Scala developer looking for an exciting opportunity to work on cutting-edge big data projects? Look no further! Delhivery is seeking a talented and motivated Spark & Scala expert to join our dynamic team.
Responsibilities:
Develop and optimize Spark applications to process large-scale data efficiently.
Collaborate with cross-functional teams to design and implement data-driven solutions.
Troubleshoot and resolve performance issues in Spark jobs.
Stay up-to-date with the latest trends and advancements in Spark and Scala technologies.
Requirements:
Proficiency with Redshift, data pipelines, Kafka, real-time streaming, connectors, etc.
3+ years of professional experience with big data systems, pipelines, and data processing.
Strong experience with Apache Spark, Spark Streaming, and Spark SQL (a streaming sketch follows this posting).
Solid understanding of distributed systems, databases, system design, and big data processing frameworks.
Familiarity with Hadoop ecosystem components (HDFS, Hive, HBase) is a plus.
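A hedged sketch of the Spark Structured Streaming plus Kafka stack this posting names, written in PySpark for consistency with the other examples on this page (the role itself leans Scala). The broker address, topic, and event schema are made up.

```python
# Hypothetical streaming job: consume shipment events from Kafka, parse the
# JSON payload, and count events per status in one-minute windows.
# Requires the spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("shipment_events").getOrCreate()

schema = StructType([
    StructField("shipment_id", StringType()),
    StructField("status", StringType()),
    StructField("weight_kg", DoubleType()),
])

events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical
         .option("subscribe", "shipment-events")             # hypothetical
         .load()
         # Kafka rows carry a binary value plus an ingestion timestamp.
         .select(F.col("timestamp"),
                 F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("timestamp", "e.*")
)

# The watermark bounds state: events more than 2 minutes late are dropped.
counts = (
    events.withWatermark("timestamp", "2 minutes")
          .groupBy(F.window("timestamp", "1 minute"), "status")
          .count()
)

(counts.writeStream
       .outputMode("update")
       .format("console")
       .option("checkpointLocation", "/tmp/chk/shipments")  # hypothetical
       .start()
       .awaitTermination())
```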
Posted 1 day ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings from TCS!!! **TCS is Hiring for Hadoop Admin**
Walk-in Interview for Hadoop Admin in Hyderabad
Role: Hadoop Admin
Desired Experience: 4-10 Years
Job Location: Hyderabad
Job Description:
4+ years of working experience in Hadoop, with good exposure to Hive, Impala, and Spark.
Strong working knowledge of distributed systems, YARN, and cluster sizing (nodes, memory).
Strong working experience in Sqoop, especially handling large volumes by splitting loads into multiple chunks.
Good working experience in file-to-Hadoop ingestion.
Basic understanding of SerDe types and storage formats (Parquet, Avro, ORC, etc.).
Good knowledge of SQL basics: joins, RANK, and scenario-based queries (a small RANK example follows this posting).
Ability to grasp the 'big picture' of a solution by considering all potential options in the impacted area.
Aptitude to understand and adapt to newer technologies.
Experience in managing and leading small development teams in an agile environment.
The ability to work with teammates in a collaborative manner to achieve a mission.
Presentation skills to prepare and present to large and small groups on technical and functional topics.
Date of Walk-In: 21st June 2025 (Saturday)
Time: 9:30 AM to 12:30 PM
Location: Hyderabad
Venue: TCS Deccan Park, Plot No.1, Hitech City Main Rd, Software Units Layout, HUDA Techno Enclave, Madhapur, Hyderabad, Telangana 500081
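To illustrate the "joins, RANK, scenario-based" SQL this description asks for, here is a small Spark SQL example using Hive-compatible syntax; the data is invented for the sketch.

```python
# Rank sales reps within each region using a window function.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rank_demo").getOrCreate()

spark.createDataFrame(
    [("alice", "north", 120), ("bob", "north", 90), ("cara", "south", 150)],
    ["rep", "region", "sales"],
).createOrReplaceTempView("sales")

# RANK() restarts per region and leaves gaps after ties; DENSE_RANK() does
# not leave gaps. That distinction is a classic scenario-based question.
spark.sql("""
    SELECT region, rep, sales,
           RANK() OVER (PARTITION BY region ORDER BY sales DESC) AS rnk
    FROM sales
""").show()
```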
Posted 1 day ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Research, design, develop, and modify computer vision and machine learning algorithms and models, leveraging experience with technologies such as Caffe, Torch, or TensorFlow.
Shape product strategy for highly contextualized applied ML/AI solutions by engaging with customers, solution teams, discovery workshops, and prototyping initiatives.
Help build a high-impact ML/AI team by supporting recruitment, training, and development of team members.
Serve as an evangelist by engaging in the broader ML/AI community through research, speaking/teaching, formal collaborations, and/or other channels.
Knowledge & Abilities:
Designing integrations of, and tuning, machine learning and computer vision algorithms.
Researching and prototyping techniques and algorithms for object detection and recognition.
Convolutional neural networks (CNNs) for image classification and object detection (a minimal CNN sketch follows this posting).
Familiarity with embedded vision processing systems.
Open-source tools and platforms.
Statistical modeling, data extraction, and analysis.
Constructing, training, evaluating, and tuning neural networks.
Mandatory Skills:
One or more of the following: Java, C++, Python.
Deep learning frameworks such as Caffe, Torch, or TensorFlow, and image/video vision libraries like OpenCV, Clarifai, Google Cloud Vision, etc.
Supervised and unsupervised learning.
Feature learning, text mining, and prediction models (e.g., deep learning, collaborative filtering, SVM, and random forest) on big data computation platforms (Hadoop, Spark, Hive, and Tableau).
One or more of the following: Tableau, Hadoop, Spark, HBase, Kafka.
Experience:
2-5 years of work or educational experience in Machine Learning or Artificial Intelligence.
Creation and application of machine learning algorithms to a variety of real-world problems with large datasets.
Building scalable machine learning systems and data-driven products, working with cross-functional teams.
Working with cloud services like AWS, Microsoft, IBM, and Google Cloud.
Working with one or more of the following: natural language processing, text understanding, classification, pattern recognition, recommendation systems, targeting systems, ranking systems, or similar.
Nice to Have:
Contribution to research communities and/or efforts, including publishing papers at conferences such as NIPS, ICML, ACL, CVPR, etc.
Education:
BA/BS (advanced degree preferable) in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
Wipro is an Equal Employment Opportunity employer and makes all employment and employment-related decisions without regard to a person's race, sex, national origin, ancestry, disability, sexual orientation, or any other status protected by applicable law.
Mandatory Skills: Google Gen AI. Experience: 3-5 Years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
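As referenced in the knowledge list above, here is a minimal TensorFlow/Keras CNN sketch for image classification. The input shape, layer sizes, and ten-class output are illustrative only, not a tuned architecture.

```python
# A small CNN for image classification, built with the Keras API.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),          # small RGB images (assumed)
    layers.Conv2D(16, 3, activation="relu"),  # learn local edge features
    layers.MaxPooling2D(),                    # downsample spatially
    layers.Conv2D(32, 3, activation="relu"),  # deeper, more abstract features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # 10 hypothetical classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```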
Posted 1 day ago
Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
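For readers new to the tool, here is a hedged sketch of querying Hive from Python over HiveServer2 using the PyHive library; the host, database, and table names are placeholders. Spark's SQL interface, shown earlier on this page, is the other common route.

```python
# Connect to a HiveServer2 endpoint and run an aggregate query.
from pyhive import hive

conn = hive.Connection(host="hive.example.com", port=10000,  # hypothetical
                       username="analyst", database="warehouse")
cursor = conn.cursor()
cursor.execute("""
    SELECT category, COUNT(*) AS items
    FROM products
    GROUP BY category
""")
for category, items in cursor.fetchall():
    print(category, items)
```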
Major tech hubs such as Bengaluru, Hyderabad, Pune, and Chennai are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.
The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.
Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.
Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!