
627 MapReduce Jobs - Page 23

JobPe aggregates results for easy access, but you apply directly on the original job portal.

8.0 - 10.0 years

10 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Big Data Engineer (Remote, Contract 6 Months+)

We are looking for a Senior Big Data Engineer with deep expertise in large-scale data processing technologies and frameworks. This is a remote, contract-based position suited for a data engineering expert with strong experience in the Big Data ecosystem, including Snowflake (Snowpark), Spark, MapReduce, Hadoop, and more.

#KeyResponsibilities
- Design, develop, and maintain scalable data pipelines and big data solutions.
- Implement data transformations using Spark, Snowflake (Snowpark), Pig, and Sqoop.
- Process large data volumes from diverse sources using Hadoop ecosystem tools.
- Build end-to-end data workflows for batch and streaming pipelines.
- Optimize data storage and retrieval processes in HBase, Hive, and other NoSQL databases.
- Collaborate with data scientists and business stakeholders to design robust data infrastructure.
- Ensure data integrity, consistency, and security in line with organizational policies.
- Troubleshoot and tune performance for distributed systems and applications.
#MustHaveSkills
- Data Engineering / Big Data Tools: Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase
- Data Ingestion & ETL, Data Pipeline Design, Distributed Computing
- Strong understanding of Big Data architectures & performance tuning
- Hands-on experience with large-scale data storage and query optimization

#NiceToHave
- Apache Airflow / Oozie experience
- Knowledge of cloud platforms (AWS, Azure, or GCP)
- Proficiency in Python or Scala
- CI/CD and DevOps exposure

#ContractDetails
Role: Senior Big Data Engineer
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Duration: 6+ Months (Contract)
Apply via Email: navaneeta@suzva.com
Contact: 9032956160

#HowToApply
Send your updated resume with the subject: "Application for Remote Big Data Engineer Contract Role"
Include in your email: Updated Resume, Current CTC, Expected CTC, Current Location, Notice Period / Availability
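The stack above names MapReduce alongside Spark and Hadoop. As a framework-free sketch of the programming model itself (the real pipelines would of course run on Hadoop or Spark, not plain Python), the classic word count can be written as three explicit phases:

```python
from collections import defaultdict

def map_phase(records):
    # Emit (key, 1) pairs, as a Hadoop mapper would.
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def shuffle_phase(pairs):
    # Group values by key, as the framework's shuffle/sort step does.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum each key's values, as a reducer would.
    return {key: sum(values) for key, values in groups.items()}

records = ["big data big pipelines", "data pipelines"]
counts = reduce_phase(shuffle_phase(map_phase(records)))
print(counts)  # {'big': 2, 'data': 2, 'pipelines': 2}
```

The same map/shuffle/reduce shape is what the frameworks distribute across a cluster; understanding it in miniature is usually what interviewers for roles like this probe for.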

Posted 2 months ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

Role: At Simplify Healthcare AI, we are focused on building industry-focused AI models. We are seeking a highly skilled and passionate Senior Data Scientist to join our dynamic team. In this role, you will be at the forefront of designing, developing, and implementing advanced machine-learning models that will have a direct impact on our products and offerings.

Responsibilities:
- Data Preprocessing: Organize, categorize, and prepare data for processing. Identify incomplete, incorrect, inaccurate, or irrelevant portions of data and interpret, cleanse, or replace/delete them as needed.
- Advanced Data Analysis: Utilize advanced statistical techniques to analyze complex, large sets of data and extract actionable insights that help drive decision-making across various functions of the business.
- Model Development: Develop, refine, and implement machine learning algorithms and statistical models that create data-driven solutions to business problems. Also responsible for validating and testing these models to ensure accuracy.
- Collaboration: Work closely with the team of engineers and product designers to understand technical and business requirements, convert them into analytic solutions, and integrate those solutions into our product suite.
- Insight-Driven Business Decisions: Translate complex results from data analyses and models into strategic recommendations for both technical and non-technical stakeholders.
- Presentations: Represent the data science team in cross-functional teams to ensure correct and impactful data science principles are incorporated. Effectively present complex statistical concepts, results, and their implications to a non-technical audience.
- Mentoring and Leadership: Provide mentorship and guidance to junior data scientists on the team, educate them on best practices, and guide them in their career paths.

Required Skills:
- Minimum of five years of working experience as a data scientist.
- Strong knowledge of machine learning models, data mining, and databases.
- Experience with programming languages such as Python and SQL.
- Knowledge of distributed data/computing tools like MapReduce, Hadoop, Hive, and Spark.
- Exceptional problem-solving skills, attention to detail, and a team-player attitude.
- Strong communication skills, with the ability to present complex technical findings to non-technical stakeholders.
- Experience mentoring junior data scientists or leading a data science team is preferred.
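The "validating and testing these models" responsibility above boils down to measuring performance on data the model never saw. A deliberately tiny illustration, with synthetic data and a one-feature threshold "model" invented for the sketch (real work would use a proper library such as scikit-learn), might look like:

```python
import random
import statistics

random.seed(0)
xs = [random.random() for _ in range(200)]
data = [(x, int(x > 0.5)) for x in xs]  # label is 1 when the feature > 0.5

split = int(0.8 * len(data))            # 80/20 holdout split
train, test = data[:split], data[split:]

# "Train": place the decision threshold midway between the class means.
pos = [x for x, y in train if y == 1]
neg = [x for x, y in train if y == 0]
threshold = (statistics.mean(pos) + statistics.mean(neg)) / 2

# Validate on held-out data only.
accuracy = sum((x > threshold) == bool(y) for x, y in test) / len(test)
print(f"holdout accuracy: {accuracy:.2f}")
```

The point of the holdout split is that an accuracy figure computed on the training set alone would be optimistically biased; the same discipline applies whether the model is a threshold or a deep network.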

Posted 2 months ago

Apply

11.0 - 15.0 years

50 - 100 Lacs

Hyderabad

Work from Office

Uber is looking for a Staff Software Engineer - Data to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 months ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

YOUR IMPACT
Are you passionate about developing mission-critical, high-quality software solutions, using cutting-edge technology, in a dynamic environment?

OUR IMPACT
We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm. We have access to the latest technology and to massive amounts of structured and unstructured data, and we leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications. Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable and success criteria will be the deployment, in 2025, of new complex data pipelines and surveillance models to detect inappropriate trading activity.

How You Will Fulfill Your Potential
As a member of our team, you will:
- partner globally with sponsors, users, and engineering colleagues across multiple divisions to create end-to-end solutions and learn from experts,
- leverage various technologies including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elastic Search, Kafka, and Kubernetes,
- be able to innovate and incubate new ideas,
- have an opportunity to work on a broad range of problems, including negotiating data contracts, capturing data quality metrics, processing large-scale data, and building surveillance detection models,
- be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products.

Qualifications
A successful candidate will possess the following attributes:
- A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study.
- Expertise in Java, as well as proficiency with databases and data manipulation.
- Experience in end-to-end solutions, automated testing, and SDLC concepts.
- The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper.
Experience in some of the following is desired and can set you apart from other candidates:
- developing in large-scale systems, such as MapReduce on Hadoop/HBase,
- data analysis using tools such as SQL, Spark SQL, and Zeppelin/Jupyter,
- API design, such as to create interconnected services,
- knowledge of the financial industry and compliance or risk functions,
- ability to influence stakeholders.

About Goldman Sachs
Goldman Sachs is a leading global investment banking, securities, and investment management firm that provides a wide range of financial services to a substantial and diversified client base that includes corporations, financial institutions, governments, and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centers around the world.

Posted 2 months ago

Apply

4.0 - 8.0 years

12 - 17 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Analyzes and investigates; provides explanations and interpretations within area of expertise
- Participate in the scrum process and deliver stories/features according to the schedule
- Collaborate with the team, architects, and product stakeholders to understand the scope and design of a deliverable
- Participate in product support activities as needed by the team
- Understand product architecture and features being built, and come up with product improvement ideas and POCs
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Undergraduate degree or equivalent experience
- Proven experience using the Big Data tech stack
- Sound knowledge of Java and the Spring framework, with good exposure to Spring Batch, Spring Data, Spring Web Services, and Python
- Proficient with the Big Data ecosystem (Sqoop, Spark, Hadoop, Hive, HBase)
- Proficient with Unix/Linux ecosystems and shell scripting
- Proven Java, Kafka, Spark, Big Data, and Azure skills
- Solid analytical, problem-solving, and communication skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location, and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 2 months ago

Apply

3.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Overall Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Software Requirements:
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.

Experience:
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
- Proven track record of implementing data engineering best practices.
- Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.

Day-to-Day Activities:
- Design, develop, and maintain ETL pipelines using PySpark on CDP.
- Implement and manage data ingestion processes from various sources.
- Process, cleanse, and transform large datasets using PySpark.
- Conduct performance tuning and optimization of ETL processes.
- Implement data quality checks and validation routines.
- Automate data workflows using orchestration tools.
- Monitor pipeline performance and troubleshoot issues.
- Collaborate with team members to understand data requirements.
- Maintain documentation of data engineering processes and configurations.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Relevant certifications in PySpark and Cloudera technologies are a plus.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and commitment to data quality.
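The "data quality checks and validation routines" responsibility above is often the first thing a candidate is asked to whiteboard. A minimal framework-free sketch (in production this logic would run inside PySpark on CDP; the row schema and rules here are invented for illustration):

```python
REQUIRED = ("id", "amount", "country")

def validate(row):
    """Return a list of human-readable issues for one record."""
    issues = []
    for field in REQUIRED:
        if row.get(field) in (None, ""):
            issues.append(f"missing {field}")
    amount = row.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        issues.append("negative amount")
    return issues

rows = [
    {"id": 1, "amount": 10.5, "country": "IN"},
    {"id": 2, "amount": -3.0, "country": "IN"},
    {"id": 3, "amount": 7.0, "country": ""},
]
report = {r["id"]: validate(r) for r in rows}
print(report)  # {1: [], 2: ['negative amount'], 3: ['missing country']}
```

The same per-row function maps directly onto a Spark `DataFrame` via a UDF or column expressions, which is why interviewers like it: the rule logic and the distribution mechanism are cleanly separated.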

Posted 2 months ago

Apply

5.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Requirements

Role/Job Title: Data Engineer - Gen AI
Function/Department: Data & Analytics
Place of Work: Mumbai

Job Purpose
The data engineer will work with our data scientists, who are building solutions using generative AI in the domains of text, audio, images, and tabular data. They will be responsible for working with large volumes of structured and unstructured data: its storage, retrieval, and augmentation with our GenAI solutions that use the said data.

Roles & Responsibilities
- Build data engineering pipelines focused on unstructured data.
- Conduct requirements gathering and project scoping sessions with subject matter experts, business users, and executive stakeholders to discover and define business data needs in GenAI.
- Design, build, and optimize the data architecture and extract, transform, and load (ETL) pipelines to make them accessible for data scientists and the products built by them.
- Work on the end-to-end data lifecycle, from data ingestion through data transformation to the data consumption layer; be versed with APIs and their usability.
- Drive the highest standards in data reliability, data integrity, and data governance, enabling accurate, consistent, and trustworthy data sets.
- Demonstrate experience with big data infrastructure, including MapReduce, Hive, HDFS, YARN, HBase, MongoDB, DynamoDB, etc.
- Create technical design documentation for projects/pipelines.
- Debug code effectively when issues arise, and use git for code versioning.

Education Qualification
Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA)
Post-Graduation: Master of Science (M.Sc) / Master of Technology (M.Tech) / Master of Computer Applications (MCA)

Experience Range: 5-10 years of relevant experience
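One concrete unstructured-data step such a GenAI pipeline needs is splitting documents into overlapping chunks before embedding and retrieval. A loose sketch, where the chunk size and overlap are invented parameters rather than anything specified in the role:

```python
def chunk_text(text, size=8, overlap=2):
    """Split `text` into word chunks of `size` words, overlapping by `overlap`."""
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break  # last chunk reached the end of the document
    return chunks

doc = " ".join(f"w{i}" for i in range(20))  # 20-word toy document
chunks = chunk_text(doc)
print(len(chunks))  # 3
```

The overlap exists so that a sentence falling on a chunk boundary still appears whole in at least one chunk, which matters for retrieval quality when the chunks are embedded independently.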

Posted 2 months ago

Apply

2.0 - 5.0 years

15 - 19 Lacs

Mumbai

Work from Office

Overview
The Data Technology team at MSCI is responsible for meeting the data requirements across various business areas, including Index, Analytics, and Sustainability. Our team collates data from multiple sources such as vendors (e.g., Bloomberg, Reuters), website acquisitions, and web scraping (e.g., financial news sites, company websites, exchange websites, filings). This data can be in structured or semi-structured formats. We normalize the data, perform quality checks, assign internal identifiers, and release it to downstream applications.

Responsibilities
As data engineers, we build scalable systems to process data in various formats and volumes, ranging from megabytes to terabytes. Our systems perform quality checks, match data across various sources, and release it in multiple formats. We leverage the latest technologies, sources, and tools to process the data. Some of the exciting technologies we work with include Snowflake, Databricks, and Apache Spark.

Qualifications
- Core Java, Spring Boot, Apache Spark, Spring Batch, Python.
- Exposure to SQL databases like Oracle, MySQL, and Microsoft SQL Server is a must.
- Any experience, knowledge, or certification in cloud technology, preferably Microsoft Azure or Google Cloud Platform, is good to have.
- Exposure to NoSQL databases like Neo4j or document databases is also good to have.

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- An environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose - to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law.
MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
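The overview above describes normalizing vendor data and assigning internal identifiers before release. A deliberately simplified sketch of that step, in which the vendor field names and the id scheme are invented for illustration (not MSCI's actual data model):

```python
import hashlib

# Map differing vendor field names onto one internal schema.
VENDOR_FIELD_MAP = {
    "Ticker": "symbol", "ticker_symbol": "symbol",
    "Name": "name", "company_name": "name",
}

def normalize(record):
    out = {VENDOR_FIELD_MAP.get(k, k): v for k, v in record.items()}
    out["symbol"] = out["symbol"].strip().upper()
    # Stable internal identifier derived from the normalized symbol, so the
    # same entity from two vendors resolves to one id.
    out["internal_id"] = hashlib.sha1(out["symbol"].encode()).hexdigest()[:8]
    return out

a = normalize({"Ticker": " msci ", "Name": "MSCI Inc."})
b = normalize({"ticker_symbol": "MSCI", "company_name": "MSCI Inc."})
print(a["symbol"], a["internal_id"] == b["internal_id"])  # MSCI True
```

Deriving the id deterministically from normalized fields (rather than assigning sequence numbers) is one common way to keep identifiers stable across re-ingestions; real systems layer entity-matching rules on top of this.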

Posted 2 months ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

What you’ll do?
 Lead and mentor a team of data scientists/analysts.
 Provide analytical insights by analyzing various types of data, including mining our customer data, reviewing relevant cases/samples, and incorporating feedback from others.
 Work closely with business partners and stakeholders to determine how to design analysis, testing, and measurement approaches that will significantly improve our ability to understand and address emerging business issues.
 Produce intelligent, scalable, and automated solutions by leveraging Data Science skills.
 Work closely with Technology teams on the development of new capabilities to define requirements and priorities based on data analysis and business knowledge.
 Develop expertise in specific areas by leading analytical projects independently, while setting goals, providing benefit estimations, defining workflows, and coordinating timelines in advance.
 Provide updates to leadership, peers, and other stakeholders that simplify and clarify complex concepts and the results of analyses, with emphasis on actionable outcomes and impact to the business.

Who you need to be?
 4+ years in advanced analytics, statistical modelling, and machine learning.
 Best-practice knowledge in credit risk - a strong understanding of the full lifecycle from origination to debt collection.
 Well versed in ML algorithms, big data concepts, and cloud implementations.
 High proficiency in Python and SQL/NoSQL.
 Collections and digital channels experience a plus.
 Strong organizational skills and excellent follow-through.
 Outstanding written, verbal, and interpersonal communication skills.
 High emotional intelligence, a can-do mentality, and a creative approach to problem solving.
 Takes personal ownership; a self-starter with the ability to drive projects with minimal guidance and a focus on high-impact work.
 Learns continuously; seeks out knowledge, ideas, and feedback, and looks for opportunities to build own skills, knowledge, and expertise.
 Experience with big data and cloud computing, e.g., Spark and Hadoop (MapReduce, Pig, Hive).
 Experience in risk and credit score domains preferred.

Posted 2 months ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

You Lead the Way. We’ve Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you’ll learn and grow as we help you create a career journey that’s unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you’ll be recognized for your contributions, leadership, and impact—every colleague has the opportunity to share in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to provide the world’s best customer experience every day. And we’ll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together. As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customer's digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. Amex offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex. Responsibilities include but are not limited to: Owns all technical aspects of software development for assigned applications. Performs hands-on architecture, design, and development of systems. 
Functions as a member of an agile team and helps drive consistent development practices with regard to tools, common components, and documentation. Typically spends 80% of time writing code and testing, and the remainder of time collaborating with stakeholders through ongoing product/platform releases. Develops a deep understanding of tie-ins with other Amex systems and platforms within the supported domains. Writes code and unit tests, works on API specs, automation, and conducts code reviews and testing. Performs ongoing refactoring of code, utilizes visualization and other techniques to fast-track concepts, and delivers continuous improvement. Identifies opportunities to adopt innovative technologies. Provides continuous support for ongoing application availability. Works closely with product owners on blueprints and annual planning of feature sets that impact multiple platforms and products. Works with product owners to prioritize features for ongoing sprints and manage a list of technical requirements based on industry trends, new technologies, known defects, and issues.
Qualifications:
- Bachelor's degree in computer science, computer engineering, another technical discipline, or equivalent work experience
- 5+ years of software development experience
- Demonstrated experience with Agile or other rapid application development methods
- Demonstrated experience with object-oriented design and coding
Demonstrated experience with these core technical skills (mandatory):
- Core Java, Spring Framework, Java EE
- Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
- Spark
- Relational databases (Postgres / MySQL / DB2, etc.)
- Cloud development (micro-services)
- Parallel & distributed (multi-tiered) systems
- Application design, software development, and automated testing
Demonstrated experience with these additional technical skills (nice to have):
- Unix / shell scripting
- Python / Scala
- Message queuing, stream processing (Kafka)
- Elastic Search
- Web services, open API development, and REST concepts
- Implementing integrated automated release management using tools/technologies/frameworks like Maven, Git, code/security review tools, Jenkins, automated testing, and JUnit
We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities. American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 2 months ago

Apply

6.0 - 7.0 years

12 - 17 Lacs

Mumbai

Work from Office

Role Description: As a Scala Tech Lead, you will be a technical leader and mentor, guiding your team to deliver robust and scalable solutions. You will be responsible for setting technical direction, ensuring code quality, and fostering a collaborative and productive team environment. Your expertise in Scala and your ability to translate business requirements into technical solutions will be crucial for delivering successful projects.

Responsibilities:
- Understand and implement tactical or strategic solutions for given business problems.
- Discuss business needs and technology requirements with stakeholders.
- Define and derive strategic solutions and identify tactical solutions when necessary.
- Write technical design and other solution documents per Agile (SCRUM) standards.
- Perform data analysis to aid development work and other business needs.
- Develop high-quality Scala code that meets business requirements.
- Perform unit testing of developed code using automated BDD test frameworks.
- Participate in testing efforts to validate and approve technology solutions.
- Follow MS standards for the adoption of automated release processes across environments.
- Perform automated regression test case suites and support UAT of developed solutions.
- Work using collaborative techniques with other FCT (Functional Core Technology) and NFRT (Non-Functional Requirements Team) teams.
- Communicate effectively with stakeholders and team members.
- Provide technical guidance and mentorship to team members.
- Identify opportunities for process improvements and implement effective solutions.
- Drive continuous improvement in code quality, development processes, and team performance.
- Participate in post-mortem reviews and implement lessons learned.

Qualifications:

Experience:
- [Number] years of experience in software development, with a focus on Scala.
- Proven experience in leading and mentoring software development teams.
- Experience in designing and implementing complex Scala-based solutions.
- Strong proficiency in the Scala programming language.
- Experience with functional programming concepts and libraries.
- Knowledge of distributed systems and data processing technologies.
- Experience with automated testing frameworks (BDD).
- Familiarity with Agile (SCRUM) methodologies.
- Experience with CI/CD pipelines and DevOps practices.
- Understanding of data analysis and database technologies.

Posted 2 months ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Job Opening: Big Data Engineer (Remote, Contract 6 Months+) Location: Remote | Contract Duration: 6+ Months | Domain: Big Data Stack We are looking for a Senior Big Data Engineer with deep expertise in large-scale data processing technologies and frameworks. This is a remote, contract-based position suited for a data engineering expert with strong experience in the Big Data ecosystem, including Snowflake (Snowpark), Spark, MapReduce, Hadoop, and more. #KeyResponsibilities Design, develop, and maintain scalable data pipelines and big data solutions. Implement data transformations using Spark, Snowflake (Snowpark), Pig, and Sqoop. Process large data volumes from diverse sources using Hadoop ecosystem tools. Build end-to-end data workflows for batch and streaming pipelines. Optimize data storage and retrieval processes in HBase, Hive, and other NoSQL databases. Collaborate with data scientists and business stakeholders to design robust data infrastructure. Ensure data integrity, consistency, and security in line with organizational policies. Troubleshoot and tune performance for distributed systems and applications.
#MustHaveSkills in Data Engineering / Big Data Tools: Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase Data Ingestion & ETL, Data Pipeline Design, Distributed Computing Strong understanding of Big Data architectures & performance tuning Hands-on experience with large-scale data storage and query optimization #NiceToHave Apache Airflow / Oozie experience Knowledge of cloud platforms (AWS, Azure, or GCP) Proficiency in Python or Scala CI/CD and DevOps exposure #ContractDetails Role: Senior Big Data Engineer Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote Duration: 6+ Months (Contract) Apply via Email: navaneeta@suzva.com Contact: 9032956160 #HowToApply Send your updated resume with the subject: "Application for Remote Big Data Engineer Contract Role" Include in your email: Updated Resume Current CTC Expected CTC Current Location Notice Period / Availability
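For candidates less familiar with the MapReduce model named in the stack above, the map/shuffle/reduce phases can be sketched framework-free in a few lines of plain Python (function and variable names here are illustrative, not from any particular codebase):

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # Map: emit (key, 1) pairs for each word in one input line.
    return [(word.lower(), 1) for word in record.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: aggregate all values emitted for a single key.
    return key, sum(values)

lines = ["big data big pipelines", "data pipelines"]
mapped = chain.from_iterable(map_phase(line) for line in lines)
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'big': 2, 'data': 2, 'pipelines': 2}
```

Hadoop, Spark, and Snowpark all scale this same pattern across many machines; the shuffle step is typically the expensive, network-bound part that performance tuning targets.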

Posted 2 months ago

Apply

11.0 - 20.0 years

30 - 45 Lacs

Hyderabad, Bengaluru

Hybrid

We are currently seeking a Senior Principal Consultant in our Technical Services department within the Oracle NetSuite Customer Success Consulting team. This position requires heavy interaction with customers, other NetSuite application developers, NetSuite implementation teams, and NetSuite partners. This individual will be a part of the team that scopes, designs, develops, and deploys custom scripts, integrations, and workflow solutions for our customers. Job description displayed in the job posting An experienced consulting professional who has a broad understanding of solutions, industry best practices, multiple business processes or technology designs within a product/technology family. Operates independently to provide quality work products to an engagement. Performs varied and complex duties and tasks that need independent judgment, in order to implement Oracle products and technology to meet customer needs. Applies Oracle methodology, company procedures, and leading practices. Typical Workload: Each day can be very different. It is a great job for people looking to step out of normal routines. We have a broad range of responsibilities. The typical workload breaks down into roughly the following percentages: • 30% scripting, QA, creating and executing test plans. • 30% customer-facing work: walking through use cases, reviewing test plans, and sometimes being brought in to solve problems. • 15% attending internal meetings including knowledge transfers, mentoring sessions for less experienced resources, and other strategic initiatives • 10% integration consulting with customer and third-party systems • 10% executive updates, documentation, and overall project management.
• 5% data migration Responsibilities include: Track and report project progress to appropriate parties using NetSuite and Jira • Assist in defining custom scripts on NetSuite's SuiteCloud platform • Collaborate with other NetSuite consultants to validate business/technical requirements through interview and analysis • Produce system design documents and participate in technical walkthroughs • Lead technical work streams, design or validate scripts, and coordinate with other developers, Quality Assurance (QA), and deployment activities • Conduct code reviews • Conduct user acceptance testing for complex solutions • Assist in development, QA, and deployment processes as necessary to meet project requirements • Ability to work in a global team environment • Mentor less experienced consultants Preferred Qualifications/Skills include: • 10+ years of experience with NetSuite or other ERP/CRM solutions; NetSuite highly preferred. • A degree in mathematics, computer science or engineering • NetSuite SuiteCloud development/design/testing/code review experience, including third-party integration • Experience leading technical work streams including other developers and global delivery teams. • Exposure to system architecture, object-oriented design, and web frameworks and patterns strongly preferred • Ability to author detailed documents capturing workflow processes, use cases, exception handling, and test cases • Consulting role experience • Software development lifecycle (SDLC) methodology knowledge and use • Software development (JavaScript preferred) • Proficiency in error resolution, error handling and debugging. • Experience with IDEs (WebStorm preferred), source control systems (Git preferred), unit-testing tools and defect management tools • Experience with XML/XSL and Web Services (SOAP, WSDL, REST, JSON) • Experience developing web applications using JSP/Servlets, DHTML, and JavaScript • Experience with Jira • Strong interpersonal and communication skills

Posted 2 months ago

Apply

6.0 - 11.0 years

25 - 40 Lacs

Hyderabad, Bengaluru

Hybrid

We are currently seeking a Senior Principal Consultant in our Technical Services department within the Oracle NetSuite Customer Success Consulting team. This position requires heavy interaction with customers, other NetSuite application developers, NetSuite implementation teams, and NetSuite partners. This individual will be a part of the team that scopes, designs, develops, and deploys custom scripts, integrations, and workflow solutions for our customers. Job description displayed in the job posting An experienced consulting professional who has a broad understanding of solutions, industry best practices, multiple business processes or technology designs within a product/technology family. Operates independently to provide quality work products to an engagement. Performs varied and complex duties and tasks that need independent judgment, in order to implement Oracle products and technology to meet customer needs. Applies Oracle methodology, company procedures, and leading practices. Typical Workload: Each day can be very different. It is a great job for people looking to step out of normal routines. We have a broad range of responsibilities. The typical workload breaks down into roughly the following percentages: • 30% scripting, QA, creating and executing test plans. • 30% customer-facing work: walking through use cases, reviewing test plans, and sometimes being brought in to solve problems. • 15% attending internal meetings including knowledge transfers, mentoring sessions for less experienced resources, and other strategic initiatives • 10% integration consulting with customer and third-party systems • 10% executive updates, documentation, and overall project management.
• 5% data migration Responsibilities include: Track and report project progress to appropriate parties using NetSuite and Jira • Assist in defining custom scripts on NetSuite's SuiteCloud platform • Collaborate with other NetSuite consultants to validate business/technical requirements through interview and analysis • Produce system design documents and participate in technical walkthroughs • Lead technical work streams, design or validate scripts, and coordinate with other developers, Quality Assurance (QA), and deployment activities • Conduct code reviews • Conduct user acceptance testing for complex solutions • Assist in development, QA, and deployment processes as necessary to meet project requirements • Ability to work in a global team environment • Mentor less experienced consultants Preferred Qualifications/Skills include: • 6+ years of experience with NetSuite or other ERP/CRM solutions; NetSuite highly preferred. • A degree in mathematics, computer science or engineering • NetSuite SuiteCloud development/design/testing/code review experience, including third-party integration • Experience leading technical work streams including other developers and global delivery teams. • Exposure to system architecture, object-oriented design, and web frameworks and patterns strongly preferred • Ability to author detailed documents capturing workflow processes, use cases, exception handling, and test cases • Consulting role experience • Software development lifecycle (SDLC) methodology knowledge and use • Software development (JavaScript preferred) • Proficiency in error resolution, error handling and debugging. • Experience with IDEs (WebStorm preferred), source control systems (Git preferred), unit-testing tools and defect management tools • Experience with XML/XSL and Web Services (SOAP, WSDL, REST, JSON) • Experience developing web applications using JSP/Servlets, DHTML, and JavaScript • Experience with Jira • Strong interpersonal and communication skills

Posted 2 months ago

Apply

1.0 - 4.0 years

1 - 5 Lacs

Mumbai

Work from Office

Location: Mumbai Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems. Key Responsibilities: Build scalable batch and real-time ETL pipelines using Spark and Hive Integrate structured and unstructured data sources Perform performance tuning and code optimization Support orchestration and job scheduling (NiFi, Airflow) Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Skills Required: Proficiency in PySpark/Scala with Hive/Impala Experience with data partitioning, bucketing, and optimization Familiarity with Kafka, Iceberg, NiFi is a must Knowledge of banking or financial datasets is a plus
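The bucketing skill listed above refers to Hive-style bucketing, where each row is assigned to a fixed bucket by hashing its key, so joins and sampling only touch matching buckets. A minimal plain-Python sketch of the idea (bucket count, column names, and the use of Python's built-in `hash` in place of Hive's hash function are all illustrative assumptions):

```python
def bucket_for(key, num_buckets=4):
    # Hive puts a row in bucket hash(key) mod num_buckets; Python's
    # built-in hash() stands in for Hive's hash function here.
    return hash(key) % num_buckets

# Illustrative rows; in Hive these would be table rows bucketed on account_id.
rows = [{"account_id": f"A{i:03d}", "amount": 100 + i} for i in range(8)]

buckets = {}
for row in rows:
    buckets.setdefault(bucket_for(row["account_id"]), []).append(row)

# Every row lands in exactly one bucket, so a join on account_id only
# needs to compare rows from buckets with the same number.
assert sum(len(b) for b in buckets.values()) == len(rows)
```

Partitioning works the same way but splits data by column value (e.g. one directory per date) rather than by hash, and the two are often combined.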

Posted 2 months ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Hyderabad

Work from Office

As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world’s technology leader. Come to IBM and make a global impact. Responsibilities: Manage end-to-end feature development and resolve challenges faced in implementing it Learn new technologies and apply them to feature development within the time frame provided Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Overall, more than 6 years of experience, with 4+ years of strong hands-on experience in Python and Spark Strong technical ability to understand, design, write, and debug applications in Python and PySpark Strong problem-solving skills Preferred technical and professional experience Good to have: hands-on experience with cloud technology (AWS/GCP/Azure)

Posted 2 months ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Senior Analyst, Data Strategy Overview The Data Quality team in the Data Strategy & Management organization (Chief Data Office) is responsible for developing and driving Mastercard’s core data management program and broadening its data quality efforts. This team ensures that Mastercard maintains and increases the value of Mastercard’s data assets as we enable new forms of payment and new players in the ecosystem and expand the collection and use of data to support new lines of business. The Enterprise Data Quality team is responsible for ensuring data is of the highest quality and fit for purpose to support business and analytic uses. The team works to identify and prioritize opportunities for data quality improvement, develops strategic mitigation plans and coordinates remediation activities with MC Tech and the business owner. Role Support the processes for improving and expanding merchant data, including address standardization, geocoding, and incorporating new merchant data sources to assist in improvements. Assess quality issues in merchant data, transactional data, and other critical data sets. Support internal and external feedback loops to improve data submitted through the transaction data stream. Solution and present data challenges in a manner suitable for product and business understanding.
Provide subject matter expertise on merchant data for the organization’s product development efforts. Coordinate with MC Tech to develop DQ remediation requirements for core systems, data warehouse and other critical applications. Manage corresponding remediation projects to ensure successful implementation. Lead organization-wide awareness and communication of data quality initiatives and remediation activities, ensuring seamless implementation. Coordinate with critical vendors to manage project timelines and achieve quality deliverables. Develop and implement, with MC Tech, data pipelines that extract, transform, and load data into an information product that supports organizational strategic goals. Implement new technologies and frameworks as per project requirements. All About You Hands-on experience managing technology projects with demonstrated ability to understand complex data and technology initiatives Ability to lead and influence others to advance deliverables Understanding of emerging technologies including, but not limited to, cloud architecture, machine learning/AI and Big Data infrastructure Data architecture experience and experience in building data models. Experience deploying and working with big data technologies like Hadoop, Spark, and Sqoop. Experience with streaming frameworks like Kafka and Axon and pipelines like NiFi. Proficient in OO programming (Python, Java/Spring Boot/J2EE, and Scala). Experience with the Hadoop ecosystem (HDFS, YARN, MapReduce, Spark, Hive, Impala). Experience with Linux, the Unix command line, Unix shell scripting, SQL and any scripting language. Experience with data visualization tools such as Tableau, Domo, and/or PowerBI is a plus. Experience presenting data findings in a readable and insight-driven format. Experience building support decks.
Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-249888
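The extract-transform-load pipelines this role describes can be sketched end to end in plain Python (the field names and the merchant-name cleaning rule are illustrative assumptions standing in for the address/merchant standardization mentioned above, not Mastercard's actual schema):

```python
def extract(raw_records):
    # Extract: pull records from a source (here, an in-memory list
    # stands in for a transaction feed or file drop).
    yield from raw_records

def transform(records):
    # Transform: standardize the merchant name and drop rows with no
    # usable merchant, mirroring the standardization described above.
    for rec in records:
        name = rec.get("merchant", "").strip().upper()
        if name:
            yield {"merchant": name, "amount": rec["amount"]}

def load(records, sink):
    # Load: append cleaned rows to the target store (a list here,
    # a warehouse table in practice).
    sink.extend(records)
    return sink

warehouse = load(transform(extract(
    [{"merchant": " acme corp ", "amount": 12.5},
     {"merchant": "", "amount": 3.0}])), [])
print(warehouse)  # [{'merchant': 'ACME CORP', 'amount': 12.5}]
```

Production pipelines follow the same three-stage shape but run each stage distributed (Spark, NiFi) and add the monitoring and anomaly alerts the role calls for.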

Posted 2 months ago

Apply

6.0 - 10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Description Join GlobalLogic to be a vital part of the team working on a huge software project for a world-class company providing M2M / IoT 4G/5G modules, e.g. to the automotive, healthcare and logistics industries. Through our engagement, we help our customer develop the end-user modules’ firmware, implement new features, maintain compatibility with the newest telecommunication and industry standards, and perform analysis and estimation of the customer requirements. Requirements BA / BS degree in Computer Science, Mathematics or related technical field, or equivalent practical experience. Experience in Cloud SQL and Cloud Bigtable Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics Experience in Google Transfer Appliance, Cloud Storage Transfer Service, BigQuery Data Transfer Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Flume). Experience working with technical customers. Experience in writing software in one or more languages such as Java, Python. 6-10 years of relevant consulting, industry or technology experience Strong problem solving and troubleshooting skills Strong communicator Job responsibilities Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL / ELT and reporting / analytic tools and environments. Experience in technical consulting. Experience architecting, developing software, or building internet-scale production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS / Azure (good to have) Experience working with big data, information retrieval, data mining or machine learning, as well as experience in building multi-tier high availability applications with modern web technologies (such as NoSQL, Kafka, NPL, MongoDB, SparkML, Tensorflow).
Working knowledge of ITIL and / or agile methodologies Google Data Engineer certified What we offer Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. 
Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.

Posted 2 months ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are looking for an energetic, high-performing and highly skilled Quality Assurance Engineer to help shape our technology and product roadmap. You will be part of the fast-paced, entrepreneurial Enterprise Personalization portfolio focused on delivering the next generation of global marketing capabilities. About the Role This team is responsible for global campaign tracking of new account acquisition and bounty payments and leverages transformational technologies such as SQL, Hadoop, Spark, PySpark, HDFS, MapReduce, Hive, HBase, Kafka & Java. Responsibilities Provides domain expertise to engineers on Automation, Testing and Quality Assurance (QA) methodologies and processes. Crafts and executes test scripts. Assists in preparation of test strategies. Sets up and maintains test data & environments. Logs results. High-performing and highly skilled in Quality Assurance. Expertise in Automation, Testing and QA methodologies. Proficient in SQL, Hadoop, Spark, PySpark, HDFS, MapReduce, Hive, HBase, Kafka & Java. If you are interested and can join us within 0-30 days, please share your resume at vaishali.tyagi@impetus.com

Posted 2 months ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description ABOUT GOLDMAN SACHS Goldman Sachs is a leading global investment banking, securities and investment management firm that provides a wide range of financial services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centres around the world. Divisional Overview The Goldman Sachs Compliance Division prevents, detects and mitigates regulatory and reputational risk across the firm, and helps to strengthen the firm's culture of compliance. As an independent control function and part of the firm's second line of defense, Compliance: Assesses the firm's compliance, regulatory and reputational risk Monitors for compliance with new or amended laws, rules and regulations Designs and implements controls, policies, procedures and training Conducts independent testing Leads the firm's response to regulatory examinations, audits and inquiries Compliance Engineering empowers these activities by building and operating a suite of software platforms and applications. We are a team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We have access to the latest technology and to massive amounts of structured and unstructured data. We leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications, incorporating cutting-edge AI and efficient processes to drive them. Roles & Responsibilities As a Big Data Engineer, you will: Design, develop and maintain data pipelines and anomaly detection models which process huge volumes of trading and trade-related data. Be instrumental in providing technical direction to leverage the most relevant technologies for the different business use cases.
Work towards adopting the most suitable technologies and improving the adoption of the latest technologies across engineering teams. Drive quality: implement processes and procedures to maximize the overall quality of software. Develop an understanding of evolving regulatory frameworks and leverage cutting-edge technologies to manage the firm's regulatory and reputational risk. Develop and drive the institutionalization of best practices around coding and testing – scalability, stability and quality, code reviews and operational readiness – across the teams. Collaborate with local and global teams, allowing for technical discussions, decision making, driving standard code architecture and identifying potential blockers at an early stage. Have an opportunity to work on a broad range of problems, including negotiating data contracts and capturing data quality metrics. Technical Skills & Qualifications MBA or Bachelor's/Master's degree in Computer Science, Computer Engineering, or a similar field of study. 4+ years’ experience in software development Experience in developing and designing end-to-end solutions to enterprise standards including automated testing and SDLC. Ability (and tenacity) to clearly express ideas and arguments in meetings and on paper. Experience with one or more of: Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elastic Search, Kafka, Kubernetes Expertise in Java or any other object-oriented language, as well as proficiency with databases and data manipulation Data analysis using tools such as SQL, Spark SQL, Zeppelin/Jupyter About Goldman Sachs At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do.
We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html © The Goldman Sachs Group, Inc., 2023. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer
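The anomaly detection work this posting describes can take many forms; one common baseline over a stream of trade values is a z-score threshold, sketched here in plain Python (the threshold, the sample data, and the function name are illustrative assumptions, not Goldman Sachs's actual models):

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    # Flag points more than `threshold` standard deviations from the mean.
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

# Illustrative trade prices with one obvious outlier.
trades = [100.0, 101.5, 99.8, 100.2, 100.9, 100.4, 250.0]
print(zscore_anomalies(trades))  # [250.0]
```

At trading-data volumes the same statistic is computed per partition in Spark or Flink, often over rolling windows so the mean and deviation track the market rather than the full history.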

Posted 2 months ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Waymo is an autonomous driving technology company with the mission to be the most trusted driver. Since its start as the Google Self-Driving Car Project in 2009, Waymo has focused on building the Waymo Driver—The World's Most Experienced Driver™—to improve access to mobility while saving thousands of lives now lost to traffic crashes. The Waymo Driver powers Waymo One, a fully autonomous ride-hailing service, and can also be applied to a range of vehicle platforms and product use cases. The Waymo Driver has provided over one million rider-only trips, enabled by its experience autonomously driving tens of millions of miles on public roads and tens of billions in simulation across 13+ U.S. states. The Business Systems team defines and builds the business IT architecture that supports the commercial operation of Waymo's autonomous fleet. The team stands at the intersection of finance, operations and IT; through creative architecture, expertise and attention to detail, it ensures that Waymo has world-class capabilities across all its business processes. You will report to our Sr. Manager, Supply Chain & GTM Systems and will be based in Hyderabad, India. You Will Design, develop, deploy, and maintain integration processes between cloud and on-premise applications using middleware solutions (e.g., technologies such as Apache Beam/Spark, MapReduce, Big Data, Data Fusion). Lead and manage multiple systems development projects, collaborating across Waymo, Alphabet, and external partners. Work with stakeholders to understand requirements and develop solutions. Uphold high development standards (design, configuration, testing, issue resolution). Write clean, modular, and maintainable code. Build implementations that increase business value (e.g., automating processes using reusable services). You Have Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
8+ years of experience engineering complex integration projects with SaaS and on-premise applications. 6+ years of programming experience in Java (required), Python, Web Services (RESTful, SOAP), and SQL. 2+ years of experience with the Integration Platform. Strong interpersonal, troubleshooting, and analytical skills. We Prefer Familiarity with Google Cloud Platform (Cloud Storage, BigQuery, Cloud Pub/Sub, Data Fusion) Experience developing data pipelines using Google's internal environment and extracting data using ETL tools such as Data Fusion Experience integrating with SAP, Ariba, Workday, and Salesforce cloud systems The expected base salary range for this full-time position is listed below. Actual starting pay will be based on job-related factors, including exact work location, experience, relevant training and education, and skill level. Waymo employees are also eligible to participate in Waymo’s discretionary annual bonus program, equity incentive plan, and generous Company benefits program, subject to eligibility requirements. Salary Range ₹3,000,000—₹3,630,000 INR

Posted 2 months ago

Apply

7.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world’s most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro. We’re looking for a Sr Big Data Engineer who expects more from their career. It’s a chance to extend and improve dunnhumby’s Data Engineering Team. It’s an opportunity to work with a market-leading business to explore new opportunities for us and influence global retailers. Key Responsibilities Design end-to-end data solutions, including data lakes, data warehouses, ETL/ELT pipelines, APIs, and analytics platforms.
Architect scalable and low-latency data pipelines using tools like Apache Kafka, Flink, or Spark Streaming to handle high-velocity data streams.
Design and orchestrate end-to-end automation using orchestration frameworks such as Apache Airflow to manage complex workflows and dependencies.
Design intelligent systems that can detect anomalies, trigger alerts, and automatically reroute or restart processes to maintain data integrity and availability.
Develop scalable data architecture strategies that support advanced analytics, machine learning, and real-time data processing.
Define and implement data governance, metadata management, and data quality standards.
Lead architectural reviews and technical design sessions to guide solution development.
Partner with business and IT teams to translate business needs into data architecture requirements.
Explore appropriate tools, platforms, and technologies aligned with organizational standards.
Ensure security, compliance, and regulatory requirements are addressed in all data solutions.
Evaluate and recommend improvements to existing data architecture and processes.
Provide mentorship and guidance to data engineers and technical teams.

Technical Expertise
Bachelor’s or master’s degree in Computer Science, Information Systems, Data Science, or a related field.
7+ years of experience in data architecture, data engineering, or a related field.
Proficient in data pipeline tools such as Apache Spark, Kafka, Airflow, or similar.
Experience with data governance frameworks and tools (e.g., Collibra, Alation, OpenMetadata).
Strong knowledge of cloud platforms (Azure or Google Cloud), especially cloud-native data services.
Strong understanding of API design and data security best practices.
Familiarity with data mesh, data fabric, or other emerging architectural patterns.
Experience working in Agile or DevOps environments.
Experience with modern data stack tools (e.g., dbt, Snowflake, Databricks).
Extensive experience with high-level programming languages: Python, Java, and Scala.
Experience with Hive, Oozie, Airflow, HBase, MapReduce, and Spark, along with working knowledge of Hadoop/Spark toolsets.
Extensive experience working with Git and process automation.
In-depth understanding of relational database management systems (RDBMS) and data flow development.

Soft Skills
Problem-Solving: Strong analytical skills to troubleshoot and resolve complex data pipeline issues.
Communication: Ability to articulate technical concepts to non-technical stakeholders and document processes clearly.
Collaboration: Experience working in cross-functional teams and managing stakeholder expectations.
Adaptability: Willingness to learn new tools and technologies to stay ahead in the rapidly evolving data landscape.

What You Can Expect From Us
We won’t just meet your expectations. We’ll defy them. So you’ll enjoy the comprehensive rewards package you’d expect from a leading technology company. But also, a degree of personal flexibility you might not expect. Plus, thoughtful perks, like flexible working hours and your birthday off. You’ll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn. And we don’t just talk about diversity and inclusion. We live it every day – with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One and dh Thrive as the living proof. Everyone’s invited.

Our approach to Flexible Working
At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work / life balance.
Some roles lend themselves to flexible options more than others, so if this is important to you please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process.

We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you. For an informal and confidential chat please contact stephanie.winson@dunnhumby.com to discuss how we can meet your needs.

For further information about how we collect and use your personal information please see our Privacy Notice which can be found (here)
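The role above asks for intelligent systems that detect anomalies, trigger alerts, and automatically restart processes to maintain data availability. As a rough, dependency-free illustration of that pattern (in practice an orchestrator such as Apache Airflow supplies retries and alert callbacks), a retry wrapper might look like the sketch below; `run_with_retries` and `flaky_extract` are hypothetical names invented for this example, not part of any product mentioned here:

```python
import logging
import time

log = logging.getLogger("pipeline")

def run_with_retries(task, retries=3, delay_s=0.0, alert=None):
    """Run a pipeline task; on each failure, send an alert, then retry.

    Re-raises the last exception once all attempts are exhausted.
    """
    alert = alert or (lambda msg: log.error(msg))
    for attempt in range(1, retries + 1):
        try:
            return task()
        except Exception as exc:
            alert(f"attempt {attempt}/{retries} failed: {exc}")
            if attempt == retries:
                raise
            time.sleep(delay_s)  # pause before the automatic restart

# Hypothetical flaky extract step: succeeds on the third attempt.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("source unavailable")
    return [1, 2, 3]

alerts = []  # a real system would page on-call or post to a channel here
rows = run_with_retries(flaky_extract, retries=3, alert=alerts.append)
```

The same shape maps onto Airflow's `retries` and `on_failure_callback` task arguments, with the alert sink swapped for a real notification channel.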

Posted 2 months ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world’s most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.

Most companies try to meet expectations; dunnhumby exists to defy them. Using big data, deep expertise and AI-driven platforms to decode the 21st century human experience – then redefine it in meaningful and surprising ways that put customers first. Across digital, mobile and retail. For brands like Tesco, Coca-Cola, Procter & Gamble and PepsiCo.

We’re looking for a Big Data Engineer who expects more from their career. It’s a chance to extend and improve dunnhumby’s Data Engineering Team, and an opportunity to work with a market-leading business to explore new opportunities for us and influence global retailers. Joining our team, you’ll work with world-class, passionate people as part of Innovation Technology. You will be responsible for working with stakeholders in the development of data technology that meets the goals of the dunnhumby technology strategy and data principles. Additionally, this individual will be called upon to contribute to a growing list of dunnhumby data best practices.

Key Responsibilities
Build end-to-end data solutions, including data lakes, data warehouses, ETL/ELT pipelines, APIs, and analytics platforms.
Build scalable and low-latency data pipelines using tools like Apache Kafka, Flink, or Spark Streaming to handle high-velocity data streams.
Automate data pipelines and processes end-to-end using orchestration frameworks such as Apache Airflow to manage complex workflows and dependencies.
Develop intelligent systems that can detect anomalies, trigger alerts, and automatically reroute or restart processes to maintain data integrity and availability.
Develop pipelines for real-time data processing.
Implement data governance, metadata management, and data quality standards.
Explore appropriate tools, platforms, and technologies aligned with organizational standards.
Ensure security, compliance, and regulatory requirements are addressed in all data solutions.
Evaluate and recommend improvements to existing data architecture and processes.

Technical Expertise
Bachelor’s or master’s degree in Computer Science, Information Systems, Data Science, or a related field.
3+ years of experience in data architecture, data engineering, or a related field.
Proficient in data pipeline tools such as Apache Spark, Kafka, Airflow, or similar.
Familiarity with data governance frameworks and tools (e.g., Collibra, Alation, OpenMetadata).
Good experience with cloud platforms (Azure or Google Cloud), especially cloud-native data services.
Familiarity with API design and data security best practices.
Familiarity with data mesh, data fabric, or other emerging architectural patterns.
Experience working in Agile or DevOps environments.
Extensive experience with high-level programming languages: Python, Java, and Scala.
Experience with Hive, Oozie, Airflow, HBase, MapReduce, and Spark, along with working knowledge of Hadoop/Spark toolsets.
Experience working with Git and process automation.
In-depth understanding of relational database management systems (RDBMS) and data flow development.

Soft Skills
Problem-Solving: Strong analytical skills to troubleshoot and resolve complex data pipeline issues.
Communication: Ability to articulate technical concepts to non-technical stakeholders and document processes clearly.
Collaboration: Experience working in cross-functional teams.
Adaptability: Willingness to learn new tools and technologies to stay ahead in the rapidly evolving data landscape.

What You Can Expect From Us
We won’t just meet your expectations. We’ll defy them. So you’ll enjoy the comprehensive rewards package you’d expect from a leading technology company. But also, a degree of personal flexibility you might not expect. Plus, thoughtful perks, like flexible working hours and your birthday off. You’ll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn. And we don’t just talk about diversity and inclusion. We live it every day – with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One and dh Thrive as the living proof. Everyone’s invited.

We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you. For an informal and confidential chat please contact stephanie.winson@dunnhumby.com to discuss how we can meet your needs.

Our approach to Flexible Working
At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work / life balance. Some roles lend themselves to flexible options more than others, so if this is important to you please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process.

For further information about how we collect and use your personal information please see our Privacy Notice which can be found (here)
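Among the responsibilities above is implementing data governance and data quality standards. As a minimal, stdlib-only sketch of one common pattern (real deployments would typically lean on a framework such as Great Expectations or dbt tests), row-level quality rules can be expressed as named predicates that split a batch into accepted and quarantined records; `apply_quality_rules` and the rule names below are hypothetical, invented for illustration:

```python
def apply_quality_rules(rows, rules):
    """Split rows into (accepted, quarantined) according to named predicates.

    Each quarantined entry keeps the row together with the rules it failed,
    so rejects can be reported and reprocessed later.
    """
    accepted, quarantined = [], []
    for row in rows:
        failed = [name for name, check in rules.items() if not check(row)]
        if failed:
            quarantined.append((row, failed))
        else:
            accepted.append(row)
    return accepted, quarantined

# Hypothetical rules for an orders feed.
rules = {
    "has_id": lambda r: r.get("id") is not None,
    "non_negative_amount": lambda r: r.get("amount", 0) >= 0,
}
batch = [
    {"id": 1, "amount": 10},
    {"id": None, "amount": 5},
    {"id": 2, "amount": -1},
]
ok, bad = apply_quality_rules(batch, rules)
```

Keeping the failed-rule names alongside each quarantined row is what makes the rejects auditable, which is the governance half of the requirement.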

Posted 2 months ago

Apply


5.0 - 8.0 years

5 - 9 Lacs

Kolkata

Work from Office

Role Purpose
The purpose of this role is to design, develop and troubleshoot solutions/designs/models/simulations on various software platforms as per client/project requirements.

Do
1. Design and develop solutions as per the client’s specifications
Work on different software such as CAD and CAE to develop appropriate models as per the project plan/customer requirements
Test the prototypes and designs produced on the software and check all the boundary conditions (impact analysis, stress analysis etc.)
Produce specifications and determine operational feasibility by integrating software components into a fully functional software system
Create a prototype as per the engineering drawings once the outline CAD model is prepared
Perform failure mode and effects analysis (FMEA) for any new requirements received from the client
Provide optimized solutions to the client by running simulations in a virtual environment
Ensure software is updated with the latest features to make it cost effective for the client
Enhance applications/solutions by identifying opportunities for improvement, making recommendations, and designing and implementing systems
Follow industry-standard operating procedures for various processes and systems as per the client requirement while modeling a solution on the software

2. Provide customer support and problem solving from time to time
Perform defect fixes raised by the client or software integration team while solving the tickets raised
Develop software verification plans and quality assurance procedures for the customer
Troubleshoot, debug and upgrade existing systems on time, with minimum latency and maximum efficiency
Deploy programs and evaluate user feedback for adequate resolution with customer satisfaction
Comply with project plans and industry standards

3. Ensure reporting & documentation for the client
Ensure weekly, monthly status reports for the clients as per requirements
Maintain documents and create a repository of all design changes, recommendations etc.
Maintain time-sheets for the clients
Provide written knowledge transfer/history of the project

Deliver
No. | Performance Parameter | Measure
1. | Design and develop solutions | Adherence to project plan/schedule, 100% error-free onboarding & implementation, throughput %
2. | Quality & CSAT | On-time delivery, minimum corrections, first time right, no major defects post production, 100% compliance of bi-directional traceability matrix, completion of assigned certifications for skill upgradation
3. | MIS & Reporting | 100% on-time MIS & report generation

Mandatory Skills: StreamSets.
Experience: 5-8 Years.

Posted 2 months ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies