
552 HBase Jobs - Page 20

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

The Digital: Big Data and Hadoop Ecosystems / Digital: Kafka role involves working with the relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Big Data, Hadoop Ecosystems, and Kafka domain.

Posted 2 months ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Big Data Engineer (Remote, Contract, 6 Months+)

We are looking for a Senior Big Data Engineer with deep expertise in large-scale data processing technologies and frameworks. This is a remote, contract-based position suited for a data engineering expert with strong experience in the Big Data ecosystem, including Snowflake (Snowpark), Spark, MapReduce, Hadoop, and more.

Key Responsibilities: Design, develop, and maintain scalable data pipelines and big data solutions. Implement data transformations using Spark, Snowflake (Snowpark), Pig, and Sqoop. Process large data volumes from diverse sources using Hadoop ecosystem tools. Build end-to-end data workflows for batch and streaming pipelines. Optimize data storage and retrieval processes in HBase, Hive, and other NoSQL databases. Collaborate with data scientists and business stakeholders to design robust data infrastructure. Ensure data integrity, consistency, and security in line with organizational policies. Troubleshoot and tune performance for distributed systems and applications.

Must-Have Skills: Data engineering / big data tools: Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase. Data ingestion and ETL, data pipeline design, distributed computing. Strong understanding of big data architectures and performance tuning. Hands-on experience with large-scale data storage and query optimization.

Nice to Have: Apache Airflow / Oozie experience. Knowledge of cloud platforms (AWS, Azure, or GCP). Proficiency in Python or Scala. CI/CD and DevOps exposure.

Contract Details: Role: Senior Big Data Engineer. Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote. Duration: 6+ months (contract). Apply via email: navaneeta@suzva.com. Contact: 9032956160.

How to Apply: Send your updated resume with the subject "Application for Remote Big Data Engineer Contract Role". Include in your email: updated resume, current CTC, expected CTC, current location, notice period / availability.
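To make the Spark portion of this stack concrete, here is a minimal PySpark sketch of the kind of batch pipeline the ad describes: read a table, apply a business transformation, and write partitioned Parquet to a lake path. The table name, columns, and path are hypothetical.

from pyspark.sql import SparkSession, functions as F

# Hive-enabled session; assumes the cluster has a configured metastore.
spark = (
    SparkSession.builder
    .appName("orders-batch-pipeline")
    .enableHiveSupport()
    .getOrCreate()
)

orders = spark.table("raw_db.orders")  # hypothetical source table

# A typical business transformation: drop bad rows, derive a date, aggregate.
daily_revenue = (
    orders
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"))
)

# Partitioned Parquet keeps downstream queries (Hive/Impala/Athena) fast.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://datalake/curated/daily_revenue/")  # hypothetical path
)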

Posted 2 months ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Pune

Work from Office

In one sentence: Responsible for the design, development, modification, debugging and/or maintenance of software systems. Works on specific modules, applications or technologies, and handles sophisticated assignments during the software development process.

All you need is... 5-7 years of proven experience as a software engineering specialist. Familiarity with Agile concepts is a must. Strong development experience and expertise in core C++ and shell scripting. Good analytical skills. Product knowledge in Turbo Charging and Rating Logic. Knowledge of either a relational or non-relational database: SQL/PG and HBase. Good communication and team engagement skills. Must be comfortable working in a fast-paced environment. Any experience with microservice architecture and DevOps knowledge would be an added advantage. Any experience with Elasticsearch, Kafka, or design patterns would be an added advantage.

What will your job look like? You will provide technical leadership to software engineers by coaching and mentoring throughout end-to-end software development, maintenance, and lifecycle to achieve project goals to the required level of quality; promote team engagement and motivation. Provide recommendations to the software engineering manager for estimates, resource needs, breakthroughs and risks; ensure effective delegation, supervising tasks, identifying risks and handling mitigation and critical issues. Provide hands-on technical and functional mentorship for the design, maintenance, build, integration and testing of sophisticated software components according to functional and technical design specifications; follow software development methodologies and release processes. You will analyze and report the requirements and provide impact assessments for new features or bug fixes. Make high-level designs and establish technical standards. You will represent and lead discussions related to the product/application/modules/team and build relationships with internal customers/partners. You will implement quality processes (such as performing technical root cause analysis and outlining corrective action for given problems), measure them, take corrective action in case of variances, and ensure all agreed project work is completed to the required level of quality.

Why you will love this job: The chance to serve as a specialist in software and technology. You will take an active role in technical mentoring within the team. We provide stellar benefits from health to dental to paid time off and parental leave!

Posted 2 months ago

Apply

3.0 - 6.0 years

13 - 23 Lacs

Gurugram

Work from Office

Looking for an experienced Big Data Developer to develop, maintain, and optimize our big data solutions. Experience in Java, Spark, API development, Hadoop, HDFS, Hive, HBase, Kafka

Posted 2 months ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Pune

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our client's business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: Must have 5+ years of experience in big data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (like a rules engine). Developed Python code to gather data from HBase and designed the solution for implementation using PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations, and utilized Hive context objects to perform read/write operations.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
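The rules-framework experience described above can be sketched roughly as follows in PySpark; the table names and rules are hypothetical, and a Hive-enabled SparkSession stands in for the HiveContext the ad mentions.

from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("rules-engine")
    .enableHiveSupport()
    .getOrCreate()
)

# Hypothetical rule set; in the framework the ad describes, these would
# come from a config table rather than being hard-coded.
rules = {
    "missing_customer": "customer_id IS NULL",
    "negative_amount": "amount < 0",
}

df = spark.table("staging_db.transactions")  # hypothetical Hive table

# Evaluate each rule as a boolean column, then count violations per row.
for name, predicate in rules.items():
    df = df.withColumn(name, F.expr(predicate))
df = df.withColumn(
    "violation_count", sum((F.col(n).cast("int") for n in rules), F.lit(0))
)

# Route clean and rejected rows to separate Hive tables.
clean = df.filter("violation_count = 0").drop("violation_count", *rules)
rejected = df.filter("violation_count > 0")
clean.write.mode("overwrite").saveAsTable("curated_db.transactions")
rejected.write.mode("overwrite").saveAsTable("quality_db.rejected_transactions")

Keeping the predicates as plain SQL strings is what makes this feel like a rules engine: new checks are data, not code changes.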

Posted 2 months ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate, We are hiring a Clojure Developer to build modern applications with simplicity, immutability, and strong functional design. Key Responsibilities: Write functional code using Clojure and ClojureScript. Develop APIs, web apps, or backends using Ring, Compojure, or Luminus. Work with immutable data structures and REPL-driven development. Build scalable microservices or event-driven systems. Maintain clean, modular, and expressive codebases. Required Skills & Qualifications: Strong experience with Clojure, Leiningen, and core.async. Familiarity with functional programming and persistent data structures. Experience integrating with Java, Datomic, or Kafka. Bonus: Frontend experience with Reagent or Re-frame. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Reddy, Delivery Manager, Integra Technologies

Posted 2 months ago

Apply

1.0 - 5.0 years

11 - 15 Lacs

Noida

Work from Office

You will spend time ensuring the products have the best technical design and architecture; you will be supported by peers and team members in creating best-in-class technical solutions. Identify technical challenges proactively and provide effective solutions to overcome them, ensuring the successful implementation of features and functionality. Quickly respond to business needs and client-facing teams' demand for features, enhancements and bug fixes. Work with senior Ripik.AI tech and AI leaders in shaping and scaling the software products and Ripik's proprietary platform for hosting manufacturing-focused AI and ML software products.

Required Skills & Experience: You should have 3+ years of experience, with deep expertise in Java, Golang and Python. Must have: expertise in coding business logic, server scripts and application programming interfaces (APIs). Excellent at writing optimal SQL queries for backend databases; CRUD operations on databases from applications. Exposure to relational databases (MySQL, Postgres) and non-relational databases (MongoDB, graph databases, HBase, cloud-native big data stores); willing to learn and ramp up on multiple database technologies. Must have experience with at least one public cloud platform (GCP/Azure/AWS; GCP preferred). Good to have: basic knowledge of advanced analytics / machine learning / artificial intelligence (you will collaborate with ML engineers to build the backend of AI-enabled apps).
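As a minimal illustration of the CRUD-from-application skill listed above, the following Python sketch uses the standard-library sqlite3 module as a stand-in for MySQL/Postgres; the table and helper names are hypothetical.

import sqlite3

# sqlite3 stands in here for MySQL/Postgres; with a driver such as
# psycopg2 or mysql-connector-python, only the connect() call changes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plants (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")

def create_plant(name, status="active"):
    cur = conn.execute(
        "INSERT INTO plants (name, status) VALUES (?, ?)", (name, status)
    )
    conn.commit()
    return cur.lastrowid

def read_plant(plant_id):
    return conn.execute(
        "SELECT id, name, status FROM plants WHERE id = ?", (plant_id,)
    ).fetchone()

def update_plant_status(plant_id, status):
    conn.execute("UPDATE plants SET status = ? WHERE id = ?", (status, plant_id))
    conn.commit()

def delete_plant(plant_id):
    conn.execute("DELETE FROM plants WHERE id = ?", (plant_id,))
    conn.commit()

pid = create_plant("cement-kiln-4")
update_plant_status(pid, "maintenance")
print(read_plant(pid))  # (1, 'cement-kiln-4', 'maintenance')
delete_plant(pid)

The ? placeholders (parameterized queries) are what keep such CRUD helpers safe from SQL injection, regardless of which database driver is swapped in.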

Posted 2 months ago

Apply

6.0 - 8.0 years

8 - 13 Lacs

Bengaluru

Work from Office

What you'll be doing... Turn ideas into innovative products with design, development, deployment and support throughout the product/software development life cycle. Develop key software components of high-quality products. Participate in requirement gathering, idea validation, and concept prototyping. Design end-to-end solutions to bring ideas into innovative products. Refine product designs to provide an excellent user experience. Develop/code key software components of products. Integrate key software components with various network systems like Messaging, Calling, Address Book, Billing, and Provisioning. Work with systems engineers to create system/network designs and architecture. Work with performance engineers to refine software designs and code to improve performance and capacity. Use agile and iterative methods to demo product features and refine the user experience.

What we're looking for... You'll need to have: Bachelor's degree or six or more years of work experience. Experience in developing software products. Experience with agile software development. Advanced knowledge of application, data and infrastructure architecture disciplines. Understanding of architecture and design across all systems. Working proficiency in development toolsets. Experience with Java/J2EE, Spring Boot/MVC, JMS, Kafka. Experience developing front-end website architecture. Experience with front-end frameworks such as ReactJS or AngularJS. Proficiency with front-end languages such as HTML, CSS and JavaScript. Experience designing and developing APIs. Ability to gather requirements and provide solutions independently. Knowledge of databases (Oracle), Linux/Unix, NoSQL DBs (e.g. MongoDB, HBase), caching mechanisms, load balancing, and multi-data-center architecture. Knowledge of microservice architecture, cloud computing, Docker containers, RESTful APIs, EKS. Familiarity with developing and deploying services in AWS. Knowledge of object-oriented design, Agile Scrum, and test-driven development. Knowledge of Linux and troubleshooting.

Even better if you have one or more of the following: Good written and verbal communication, listening, negotiation and presentation skills. Knowledge/exposure/expertise in the Go programming language is a plus.
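For the Kafka integration this role calls for, a minimal producer sketch in Python (using the third-party kafka-python package) might look like this; the broker address, topic name, and event shape are hypothetical.

import json
from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical broker and topic.
producer = KafkaProducer(
    bootstrap_servers="broker-1:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish a provisioning event; send() is asynchronous, and flush()
# blocks until all buffered records are acknowledged by the broker.
producer.send("provisioning-events", {"msisdn": "5550001111", "action": "activate"})
producer.flush()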

Posted 2 months ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Job Summary: Person at this position has gained significant work experience and is able to apply their knowledge effectively and deliver results. Person at this position is also able to analyse and interpret complex problems and improve, change or adapt existing methods to solve them. Person at this position regularly interacts with interfacing groups/customers on technical issue clarification and resolves the issues. Also participates actively in important project/work-related activities and contributes towards identifying important issues and risks. Reaches out for guidance and advice to ensure high quality of deliverables. Person at this position consistently seeks opportunities to enhance their existing skills, acquire more complex skills and work towards enhancing their proficiency level in their field of specialisation. Works under limited supervision of the Team Lead/Project Manager.

Roles & Responsibilities: Responsible for design, coding, testing, bug fixing, documentation and technical support in the assigned area. Responsible for on-time delivery while adhering to quality and productivity goals. Responsible for adhering to guidelines and checklists for all deliverable reviews, sending status reports to the team lead and following relevant organizational processes. Responsible for customer collaboration and interactions and support for customer queries. Expected to enhance technical capabilities by attending trainings, self-study and periodic technical assessments. Expected to participate in technical initiatives related to the project and organization and deliver trainings as per plan and quality.

Education and Experience Required: Engineering graduate, MCA, etc. Experience: 2-5 years.

Competencies Description: The data engineering TCB applies to one who: 1) creates databases and storage for relational and non-relational data sources; 2) develops data pipelines (ETL/ELT) to clean, transform and merge data sources into a usable format; 3) creates a reporting layer with pre-packaged scheduled reports, dashboards and charts for self-service BI; 4) has experience on cloud platforms such as AWS, Azure and GCP in implementing data workflows; 5) has experience with tools like MongoDB, Hive, HBase, Spark, Tableau, PowerBI, Python, Scala, SQL, Elasticsearch, etc. Platforms: AWS, Azure, GCP. Technology Standard: NA. Tools: MongoDB, Hive, HBase, Tableau, PowerBI, Elasticsearch, QlikView. Languages: Python, R, Spark, Scala, SQL. Specialization: DWH, Big Data Engineering, Edge Analytics.

Posted 2 months ago

Apply

11.0 - 15.0 years

50 - 100 Lacs

Hyderabad

Work from Office

Uber is looking for a Staff Software Engineer - Data to join our dynamic team and embark on a rewarding career journey. You will be: Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 months ago

Apply

5.0 - 8.0 years

14 - 19 Lacs

Bengaluru

Work from Office

Job Summary: Person at this position takes ownership of a module and the associated quality and delivery. Person at this position provides instructions, guidance and advice to team members to ensure quality and on-time delivery. Person at this position is expected to be able to instruct and review the quality of work done by technical staff. Person at this position should be able to identify key issues and challenges by themselves, prioritize tasks and deliver results with minimal direction and supervision. Person at this position has the ability to investigate the root cause of a problem and come up with alternatives/solutions based on a sound technical foundation gained through in-depth knowledge of technology, standards, tools and processes. Person has the ability to organize and draw connections among ideas and distinguish between those which are implementable. Person demonstrates a degree of flexibility in resolving problems/issues that attests to an in-depth command of all techniques, processes, tools and standards within the relevant field of specialisation.

Roles & Responsibilities: Responsible for requirement analysis and feasibility study, including system-level work estimation with risk identification and mitigation. Responsible for design, coding, testing, bug fixing, documentation and technical support in the assigned area. Responsible for on-time delivery while adhering to quality and productivity goals. Responsible for traceability of requirements from design to delivery, code optimization and coverage. Responsible for conducting reviews, identifying risks and owning the quality of deliverables. Responsible for identifying training needs of the team. Expected to enhance technical capabilities by attending trainings, self-study and periodic technical assessments. Expected to participate in technical initiatives related to the project and organization and deliver trainings as per plan and quality. Expected to be a technical mentor for junior members. The person may be given additional responsibility for managing people at the discretion of the Project Manager.

Education and Experience Required: Engineering graduate, MCA, etc. Experience: 5-8 years.

Competencies Description: The data engineering TCB applies to one who: 1) creates databases and storage for relational and non-relational data sources; 2) develops data pipelines (ETL/ELT) to clean, transform and merge data sources into a usable format; 3) creates a reporting layer with pre-packaged scheduled reports, dashboards and charts for self-service BI; 4) has experience on cloud platforms such as AWS, Azure and GCP in implementing data workflows; 5) has experience with tools like MongoDB, Hive, HBase, Spark, Tableau, PowerBI, Python, Scala, SQL, Elasticsearch, etc. Platforms: AWS, Azure, GCP. Technology Standard: NA. Tools: MongoDB, Hive, HBase, Tableau, PowerBI, Elasticsearch, QlikView. Languages: Python, R, Spark, Scala, SQL. Specialization: DWH, Big Data Engineering, Edge Analytics.

Posted 2 months ago

Apply

5.0 - 9.0 years

13 - 18 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a highly skilled and experienced Technical Delivery Lead to join our team for a Cloud Data Modernization project. The successful candidate will be responsible for managing and leading the migration of an on-premises Enterprise Data Warehouse (SQL Server) to a modern cloud-based data platform utilizing Azure Cloud data tools and Snowflake. This platform will enable offshore (non-US) resources to build and develop Reporting, Analytics, and Data Science solutions.

Primary Responsibilities: Manage and lead the migration of the on-premises SQL Server Enterprise Data Warehouse to Azure Cloud and Snowflake. Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, Databricks, and Snowflake. Manage and guide the development of cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake. Implement and oversee DevOps practices and CI/CD pipelines using GitHub Actions. Collaborate with cross-functional teams to ensure seamless integration and data flow. Optimize and troubleshoot data pipelines and workflows. Ensure data security and compliance with industry standards. Provide technical leadership and mentorship to the engineering team. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications: 8+ years of experience in a Cloud Data Engineering role, with 3+ years in a leadership or technical delivery role. Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage), Databricks, and Snowflake. Experience with Python or other scripting languages for data processing. Experience with Agile methodologies and project management tools. Solid experience in developing cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake. Proficiency in DevOps and CI/CD practices using GitHub Actions. Proven excellent problem-solving skills and ability to work independently. Proven solid communication and collaboration skills. Solid analytical skills and attention to detail. Proven track record of successful project delivery in a cloud environment.

Preferred Qualifications: Certification in Azure or Snowflake. Experience working with automated ETL conversion tools used during cloud migrations (SnowConvert, BladeBridge, etc.). Experience with data modeling and database design. Knowledge of data governance and data quality best practices. Familiarity with other cloud platforms (e.g., AWS, Google Cloud).
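A hedged sketch of the SQL Server-to-Snowflake data movement this migration describes, using PySpark with a JDBC source and the Spark-Snowflake connector; all hosts, credentials, and table names are hypothetical, and on the actual platform ADF or Databricks jobs would orchestrate such steps.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("edw-migration").getOrCreate()

# Hypothetical on-prem source; requires the SQL Server JDBC driver on the classpath.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://edw-host:1433;databaseName=EDW")
    .option("dbtable", "dbo.Orders")
    .option("user", "etl_user")
    .option("password", "***")
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load()
)

# Hypothetical Snowflake target, written with the Spark-Snowflake connector.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "LOAD_WH",
}
(
    orders.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "ORDERS")
    .mode("overwrite")
    .save()
)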

Posted 2 months ago

Apply

4.0 - 8.0 years

12 - 17 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities: Analyzes and investigates; provides explanations and interpretations within area of expertise. Participate in the scrum process and deliver stories/features according to the schedule. Collaborate with the team, architects and product stakeholders to understand the scope and design of a deliverable. Participate in product support activities as needed by the team. Understand the product architecture and features being built, and come up with product improvement ideas and POCs. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications: Undergraduate degree or equivalent experience. Proven experience using the big data tech stack. Sound knowledge of Java and the Spring framework, with good exposure to Spring Batch, Spring Data, Spring Web Services, and Python. Proficient with the big data ecosystem (Sqoop, Spark, Hadoop, Hive, HBase). Proficient with Unix/Linux ecosystems and shell scripting. Proven Java, Kafka, Spark, big data, Azure, analytical and problem-solving skills. Proven solid analytical and communication skills.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 2 months ago

Apply

3.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Overall Responsibilities: Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy. Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP. Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements. Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes. Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline. Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem. Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes. Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives. Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations. Software Requirements: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Familiarity with Hadoop, Kafka, and other distributed computing tools. Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Strong scripting skills in Linux. Category-wise Technical Skills: PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Scripting and Automation: Strong scripting skills in Linux. Experience: 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform. Proven track record of implementing data engineering best practices. Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform. Day-to-Day Activities: Design, develop, and maintain ETL pipelines using PySpark on CDP. Implement and manage data ingestion processes from various sources. Process, cleanse, and transform large datasets using PySpark. Conduct performance tuning and optimization of ETL processes. Implement data quality checks and validation routines. Automate data workflows using orchestration tools. Monitor pipeline performance and troubleshoot issues. Collaborate with team members to understand data requirements. 
Maintain documentation of data engineering processes and configurations. Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field. Relevant certifications in PySpark and Cloudera technologies are a plus. Soft Skills: Strong analytical and problem-solving skills. Excellent verbal and written communication abilities. Ability to work independently and collaboratively in a team environment. Attention to detail and commitment to data quality.
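The data-quality-and-validation responsibility described above might be sketched in PySpark like this; the paths, critical columns, and routing choices are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("/data/ingest/customers/")  # hypothetical landing path

# Check 1: refuse to promote an empty batch.
total = df.count()
assert total > 0, "empty batch - aborting load"

# Check 2: report the null ratio of each critical column.
critical = ["customer_id", "email"]
nulls = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in critical]
).first()
for c in critical:
    print(f"{c}: {nulls[c] / total:.2%} null")

# Check 3: quarantine invalid rows instead of silently dropping them.
bad = df.filter(F.col("customer_id").isNull())
bad.write.mode("append").parquet("/data/quarantine/customers/")
df.filter(F.col("customer_id").isNotNull()) \
    .write.mode("overwrite").parquet("/data/clean/customers/")

Quarantining rather than dropping keeps the bad records auditable, which is usually what "monitoring and validation routines" amounts to in practice.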

Posted 2 months ago

Apply

6.0 - 9.0 years

32 - 35 Lacs

Noida, Kolkata, Chennai

Work from Office

Dear Candidate, We are hiring a Lua Developer to create lightweight scripting layers in games, embedded systems, or automation tools. Key Responsibilities: Develop scripts and integrations using Lua. Embed Lua in C/C++ applications for extensibility. Write custom modules or bindings for game engines or IoT devices. Optimize Lua code for memory and execution time. Integrate with APIs, data sources, or hardware systems. Required Skills & Qualifications: Proficient in Lua and its integration with host languages. Experience with Love2D, Corona SDK, or custom engines. Familiarity with C/C++, embedded Linux, or IoT. Bonus: Game scripting or automation experience. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Srinivasa Reddy Kandi, Delivery Manager, Integra Technologies

Posted 2 months ago

Apply

2.0 - 5.0 years

15 - 19 Lacs

Mumbai

Work from Office

Overview: The Data Technology team at MSCI is responsible for meeting the data requirements across various business areas, including Index, Analytics, and Sustainability. Our team collates data from multiple sources such as vendors (e.g., Bloomberg, Reuters), website acquisitions, and web scraping (e.g., financial news sites, company websites, exchange websites, filings). This data can be in structured or semi-structured formats. We normalize the data, perform quality checks, assign internal identifiers, and release it to downstream applications.

Responsibilities: As data engineers, we build scalable systems to process data in various formats and volumes, ranging from megabytes to terabytes. Our systems perform quality checks, match data across various sources, and release it in multiple formats. We leverage the latest technologies, sources, and tools to process the data. Some of the exciting technologies we work with include Snowflake, Databricks, and Apache Spark.

Qualifications: Core Java, Spring Boot, Apache Spark, Spring Batch, Python. Exposure to SQL databases like Oracle, MySQL, and Microsoft SQL Server is a must. Any experience, knowledge, or certification in cloud technology, preferably Microsoft Azure or Google Cloud Platform, is good to have. Exposure to non-SQL databases like Neo4j or document databases is also good to have.

What we offer you: Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients. A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development. Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles. We actively nurture an environment that builds a sense of inclusion, belonging and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com

Posted 2 months ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Noida

Work from Office

About the role :

- You will spend time ensuring the products have the best technical design and architecture; you will be supported by peers and team members in creating best-in-class technical solutions.

- Identify technical challenges proactively and provide effective solutions to overcome them, ensuring the successful implementation of features and functionality.

- Quickly respond to business needs and client-facing teams' demand for features, enhancements and bug fixes.

- Work with senior Ripik.AI tech and AI leaders in shaping and scaling the software products and Ripik's proprietary platform for hosting manufacturing-focused AI and ML software products.

Required Skills & Experience :

- You should have 3+ years of experience, with deep expertise in Java, Golang and Python.

- Must have: expertise in coding business logic, server scripts and application programming interfaces (APIs).

- Excellent at writing optimal SQL queries for backend databases; CRUD operations on databases from applications.

- Exposure to relational databases (MySQL, Postgres) and non-relational databases (MongoDB, graph databases, HBase, cloud-native big data stores); willing to learn and ramp up on multiple database technologies.

- Must have experience with at least one public cloud platform (GCP/Azure/AWS; GCP preferred).

- Good to have: basic knowledge of advanced analytics / machine learning / artificial intelligence (you will collaborate with ML engineers to build the backend of AI-enabled apps).

Posted 2 months ago

Apply

3.0 - 5.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Location: India, Bangalore. Type: Full time. Posted 12 days ago. Job requisition ID: JR0273871.

About the Role: Join our innovative and inclusive Logic Technology Development team as a TD AI and Analytics Engineer, where diverse talents come together to push the boundaries of semiconductor technology. You will have the opportunity to work in one of the world's most advanced cleanroom facilities, designing, executing, and analyzing experiments to meet engineering specifications for our cutting-edge processes. This role offers a unique chance to learn and operate a manufacturing line, integrating the many individual steps necessary for the production of complex microprocessors.

What We Offer: We are dedicated to creating a collaborative, supportive, and exciting environment where diverse perspectives drive exceptional results. At Intel, you will have the opportunity to transform technology and contribute to a better future by delivering innovative products. Learn more about Intel Corporation's Core Values here.

Benefits: We offer a comprehensive benefits package designed to support a healthy and fulfilling life. This includes excellent medical plans, wellness programs, recreational activities, generous time off, discounts on various products and services, and many more creative rewards that make Intel a great place to work. Discover more about our amazing benefits here.

About the Logic Technology Development (LTD) TD Intel Foundry AI and Analytics Innovation Organization: Intel Foundry TD's AI and Analytics Innovation office is committed to providing a competitive advantage through end-to-end AI and analytics solutions, driving Intel's ambitious IDM 2.0 goals. Our team is seeking an engineer with a background in data engineering, software engineering, or data science to support and develop modern AI/ML solutions. Explore what life is like inside Intel here.

Key Responsibilities: As an Engineer in the TD AI office, you will collaborate with Intel's factory automation organization and Foundry TD's functional areas to support and develop modern AI/ML solutions. Your primary responsibilities will include: Developing software and data engineering solutions for in-house AI/ML products. Enhancing existing ML platforms and devising MLOps capabilities. Understanding existing data structures in factory automation systems and building data pipelines connecting different systems. Testing and supporting full-stack big data engineering systems. Developing data ingestion pipelines, data access APIs, and services; monitoring and maintaining deployment environments and platforms; creating technical documentation; and collaborating with peers/engineering teams to streamline solution development, validation, and deployment. Managing factory big data interaction with cloud environments, Oracle, SQL, Python, software architecture, and MLOps. Interfacing with process and integration functional-area analytics teams and customers using advanced automated process control systems.

Qualifications: Minimum Qualifications: Master's or PhD degree in Computer Science, Computer Engineering, or a related science/engineering discipline. 3+ years of experience in data engineering/software development and knowledge of Spark, NiFi, Hadoop, HBase, S3 object storage, Kubernetes, REST APIs, and services. Intermediate to advanced English proficiency (both verbal and written). Preferred Qualifications: 2+ years in data analytics and machine learning (Python, R, JMP, etc.) and relational databases (SQL). 2+ years in a technical leadership role. 3+ months of working knowledge of CI/CD (Continuous Integration/Continuous Deployment) and proficiency with GitHub and GitHub Actions. Prior interaction with factory automation systems.

Application Process: By applying to this posting, your resume and profile will become visible to Intel recruiters, allowing them to consider you for current and future job openings aligned with the skills and positions mentioned above. We are constantly working towards a more connected and intelligent future, and we need your help. Change tomorrow. Start today.

Job Type: Experienced Hire. Shift: Shift 1 (India). Primary Location: India, Bangalore.

Business group: As the world's largest chip manufacturer, Intel strives to make every facet of semiconductor manufacturing state-of-the-art, from semiconductor process development and manufacturing, through yield improvement to packaging, final test and optimization, and world-class supply chain and facilities support. Employees in the Technology Development and Manufacturing Group are part of a worldwide network of design, development, manufacturing, and assembly/test facilities, all focused on utilizing the power of Moore's Law to bring smart, connected devices to every person on Earth.

Posting Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.

Position of Trust: N/A. Work Model for this Role: This role will be eligible for our hybrid work model, which allows employees to split their time between working on-site at their assigned Intel site and off-site.

Posted 2 months ago

Apply

6.0 - 7.0 years

12 - 17 Lacs

Mumbai

Work from Office

Role Description : As a Scala Tech Lead, you will be a technical leader and mentor, guiding your team to deliver robust and scalable solutions. You will be responsible for setting technical direction, ensuring code quality, and fostering a collaborative and productive team environment. Your expertise in Scala and your ability to translate business requirements into technical solutions will be crucial for delivering successful projects. Responsibilities : - Understand and implement tactical or strategic solutions for given business problems. - Discuss business needs and technology requirements with stakeholders. - Define and derive strategic solutions and identify tactical solutions when necessary. - Write technical design and other solution documents per Agile (SCRUM) standards. - Perform data analysis to aid development work and other business needs. - Develop high-quality Scala code that meets business requirements. - Perform unit testing of developed code using automated BDD test frameworks. - Participate in testing efforts to validate and approve technology solutions. - Follow MS standards for the adoption of automated release processes across environments. - Run automated regression test suites and support UAT of developed solutions. - Work collaboratively with other FCT (Functional Core Technology) and NFRT (Non-Functional Requirements Team) teams. - Communicate effectively with stakeholders and team members. - Provide technical guidance and mentorship to team members. - Identify opportunities for process improvements and implement effective solutions. - Drive continuous improvement in code quality, development processes, and team performance. - Participate in post-mortem reviews and implement lessons learned. Qualifications : Experience : - [Number] years of experience in software development, with a focus on Scala. - Proven experience in leading and mentoring software development teams. - Experience in designing and implementing complex Scala-based solutions. - Strong proficiency in the Scala programming language. - Experience with functional programming concepts and libraries. - Knowledge of distributed systems and data processing technologies. - Experience with automated testing frameworks (BDD). - Familiarity with Agile (SCRUM) methodologies. - Experience with CI/CD pipelines and DevOps practices. - Understanding of data analysis and database technologies.

Posted 2 months ago

Apply

1.0 - 4.0 years

1 - 5 Lacs

Mumbai

Work from Office

Location: Mumbai. Role Overview : As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems. Key Responsibilities : Build scalable batch and real-time ETL pipelines using Spark and Hive. Integrate structured and unstructured data sources. Perform performance tuning and code optimization. Support orchestration and job scheduling (NiFi, Airflow). Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: Proficiency in PySpark/Scala with Hive/Impala. Experience with data partitioning, bucketing, and optimization. Familiarity with Kafka, Iceberg, and NiFi is a must. Knowledge of banking or financial datasets is a plus.
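As a small illustration of the partitioning and bucketing experience this ad asks for, here is a PySpark sketch with hypothetical table and column names.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("txn-load")
    .enableHiveSupport()
    .getOrCreate()
)

txns = spark.read.parquet("/landing/transactions/")  # hypothetical source

# Partition by business date so queries prune whole directories;
# bucket by account_id so joins on that key can avoid a full shuffle.
# bucketBy requires saveAsTable, since bucket metadata lives in the metastore.
(
    txns.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .bucketBy(64, "account_id")
    .sortBy("account_id")
    .saveAsTable("curated.transactions")
)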

Posted 2 months ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Hyderabad

Work from Office

As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact. Responsibilities: Manage end-to-end feature development and resolve challenges faced in implementing it. Learn new technologies and apply them in feature development within the time frame provided. Manage debugging, root cause analysis, and fixing of issues reported in the Content Management back-end software system. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: Overall, more than 6 years of experience, with 4+ years of strong hands-on experience in Python and Spark. Strong technical ability to understand, design, write and debug applications in Python and PySpark. Strong problem-solving skills. Preferred technical and professional experience: Hands-on experience with cloud technology: AWS, GCP, or Azure.

Posted 2 months ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Mumbai

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our client's business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: Must have 5+ years of experience in big data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (like a rules engine). Developed Python code to gather data from HBase and designed the solution for implementation using PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations, and utilized Hive context objects to perform read/write operations.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.

Posted 2 months ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Pune

Work from Office

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations and developing technical capability within the Production Specialists.

Do: Oversee and support the process by reviewing daily transactions on performance parameters. Review the performance dashboard and the scores for the team. Support the team in improving performance parameters by providing technical support and process guidance. Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions. Ensure standard processes and procedures are followed to resolve all client queries. Resolve client queries as per the SLAs defined in the contract. Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting. Document and analyze call logs to spot the most frequent trends to prevent future problems. Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution. Ensure all product information and disclosures are given to clients before and after the call/email requests. Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries. Manage and resolve technical roadblocks/escalations as per SLA and quality requirements. If unable to resolve an issue, escalate it to TA & SES in a timely manner. Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions. Troubleshoot all client queries in a user-friendly, courteous and professional manner. Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business. Organize ideas and effectively communicate oral messages appropriate to listeners and situations. Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client. Mentor and guide Production Specialists on improving technical knowledge. Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists. Develop and conduct trainings (triages) within products for Production Specialists as per target. Inform the client about the triages being conducted. Undertake product trainings to stay current with product features, changes and updates. Enroll in product-specific and other trainings per client requirements/recommendations. Identify and document the most common problems and recommend appropriate resolutions to the team. Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver:
1. Process: No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT.
2. Team Management: Productivity, efficiency, absenteeism.
3. Capability Development: Triages completed, technical test performance.

Mandatory Skills: Hadoop. Experience: 5-8 years.

Posted 2 months ago

Apply

5.0 - 10.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Job Title: AWS Data Engineer. Experience: 5-10 years. Location: Bangalore.

Technical Skills: 5+ years of experience as an AWS Data Engineer: AWS S3, Glue Catalog, Glue Crawler, Glue ETL, Athena. Write Glue ETLs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3. Execute Glue crawlers to catalog S3 files, creating a catalog of S3 files for easier querying. Create SQL queries in Athena. Define data lifecycle management for S3 files. Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio. Ability to connect Glue ETLs with AWS RDS (SQL Server and Oracle) for data extraction and write transformed data into Parquet format in S3. Proficiency in setting up and managing Glue Crawlers to catalog data in S3. Deep understanding of S3 architecture and best practices for storing large datasets. Experience in partitioning and organizing data for efficient querying in S3. Knowledge of the Parquet file format's advantages for optimized storage and querying. Expertise in creating and managing the AWS Glue Data Catalog to enable structured and schema-aware querying of data in S3. Experience with Amazon Athena for writing complex SQL queries and optimizing query performance. Familiarity with creating views or transformations in Athena for business use cases. Knowledge of securing data in S3 using IAM policies, S3 bucket policies, and KMS encryption. Understanding of regulatory requirements (e.g., GDPR) and implementing secure data handling practices.

Non-Technical Skills: Candidate needs to be a good team player. Effective interpersonal, team-building and communication skills. Ability to communicate complex technology to a non-tech audience in a simple and precise manner.
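A minimal sketch of the RDS-to-Parquet Glue job this workflow describes, following the standard awsglue job pattern; the catalog database, table, and bucket names are hypothetical.

import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from a catalog table that a Glue Crawler created over the RDS source.
src = glue_context.create_dynamic_frame.from_catalog(
    database="rds_sqlserver_db",   # hypothetical catalog database
    table_name="dbo_orders",       # hypothetical crawled table
)

# Write Parquet to S3, partitioned for efficient Athena queries.
glue_context.write_dynamic_frame.from_options(
    frame=src,
    connection_type="s3",
    connection_options={
        "path": "s3://my-data-lake/orders/",  # hypothetical bucket
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)

job.commit()

A second crawler pointed at the output path would then keep the Athena-facing catalog in sync, completing the loop the ad outlines.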

Posted 2 months ago

Apply

3.0 - 6.0 years

9 - 14 Lacs

Mumbai

Work from Office

Role Overview : We are looking for a Talend Data Catalog Specialist to drive enterprise data governance initiatives by implementing Talend Data Catalog and integrating it with Apache Atlas for unified metadata management within a Cloudera-based data lakehouse. The role involves establishing metadata lineage, glossary harmonization, and governance policies to enhance trust, discovery, and compliance across the data ecosystem.

Key Responsibilities: Set up and configure Talend Data Catalog to ingest and manage metadata from source systems, the data lake (HDFS), Iceberg tables, the Hive metastore, and external data sources. Develop and maintain business glossaries, data classifications, and metadata models. Design and implement bi-directional integration between Talend Data Catalog and Apache Atlas to enable metadata synchronization, lineage capture, and policy alignment across the Cloudera stack. Map technical metadata from Hive/Impala to business metadata defined in Talend. Capture end-to-end lineage of data pipelines (e.g., from ingestion in PySpark to consumption in BI tools) using Talend and Atlas. Provide impact analysis for schema changes, data transformations, and governance rule enforcement. Support definition and rollout of enterprise data governance policies (e.g., ownership, stewardship, access control). Enable role-based metadata access, tagging, and data sensitivity classification. Work with data owners, stewards, and architects to ensure data assets are well-documented, governed, and discoverable. Provide training to users on leveraging the catalog for search, understanding, and reuse.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: 6-12 years in data governance or metadata management, with at least 2-3 years in Talend Data Catalog. Talend Data Catalog, Apache Atlas, Cloudera CDP, Hive/Impala, Spark, HDFS, SQL. Business glossary, metadata enrichment, lineage tracking, stewardship workflows. Hands-on experience with Talend-Atlas integration, whether through REST APIs, Kafka hooks, or metadata bridges.
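Since the role calls out Talend-Atlas integration through REST APIs, here is a hedged Python sketch of reading lineage from Apache Atlas's v2 API; the host, credentials, and qualified name are hypothetical.

import requests

ATLAS = "http://atlas-host:21000"  # hypothetical Atlas endpoint
AUTH = ("admin", "***")            # hypothetical credentials

# Look up an entity's GUID by qualified name, then fetch its lineage graph.
resp = requests.get(
    f"{ATLAS}/api/atlas/v2/entity/uniqueAttribute/type/hive_table",
    params={"attr:qualifiedName": "curated.transactions@cluster"},
    auth=AUTH,
)
guid = resp.json()["entity"]["guid"]

lineage = requests.get(f"{ATLAS}/api/atlas/v2/lineage/{guid}", auth=AUTH).json()
for edge in lineage.get("relations", []):
    print(edge["fromEntityId"], "->", edge["toEntityId"])

An integration layer would push the entities and relations it discovers in Talend through the corresponding POST endpoints, which is one way the bi-directional sync described above can be wired up.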

Posted 2 months ago

Apply