
339 MapReduce Jobs - Page 4

JobPe aggregates listings for easy access; you apply directly on the original job portal.

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Your Role and Responsibilities
As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.
Preferred Education: Master's Degree
Required Technical and Professional Expertise: Spring Boot, Java/J2EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python.
Preferred Technical and Professional Experience: None

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Role: Data Engineer (Scala)
Must-have experience: 5+ years overall, 3+ years relevant.
Must-have skills: Spark, SQL, Scala, PySpark.
Good to have: AWS, EMR, S3, Hadoop, Control-M.
Key responsibilities (individual contributor or part of a team):
1) Design strategies and programs to collect, store, analyse and visualize data from various sources.
2) Develop big data solution recommendations and ensure implementation of the chosen big data solution.
3) Program in different programming/scripting languages such as Scala, Python, Java, Pig or SQL.
4) Proficient knowledge of big data frameworks such as Spark and MapReduce.
5) Understanding of Hadoop, Hive, HBase, MongoDB and/or MapReduce.
6) Experience with a large cloud-computing infrastructure solution such as Amazon Web Services or Elastic MapReduce.
7) Tune the Spark engine for high data volumes (approx. a billion records) processed using BDM.
8) Troubleshoot data issues and deep-dive into root cause analysis of any performance issue.
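The tuning responsibility above is stated only at a high level, and the posting names Informatica BDM as the tool. As a hedged illustration in plain PySpark (not BDM), the sketch below shows the kind of session configuration and repartitioning commonly applied when processing on the order of a billion records; the config values, bucket paths, and column names are assumptions for illustration only.

from pyspark.sql import SparkSession

# Illustrative session tuning for a large batch job (values are assumptions).
spark = (
    SparkSession.builder
    .appName("billion_row_batch")
    .config("spark.sql.shuffle.partitions", "2000")   # more partitions for large shuffles
    .config("spark.sql.adaptive.enabled", "true")     # let AQE coalesce/split skewed partitions at runtime
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# Hypothetical source table; replace with the real input.
df = spark.read.parquet("s3://example-bucket/events/")

# Repartition on the aggregation key so ~1B rows spread evenly across executors.
result = (
    df.repartition(2000, "customer_id")
      .groupBy("customer_id")
      .count()
)

result.write.mode("overwrite").parquet("s3://example-bucket/output/customer_counts/")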

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


We are looking for energetic, high-performing and highly skilled Java + Big Data Engineers to help shape our technology and product roadmap. You will be part of the fast-paced, entrepreneurial Enterprise Personalization portfolio focused on delivering the next generation of global marketing capabilities. This team is responsible for building products that power Merchant Offers personalization for Amex card members.
Job Description:
- Demonstrated leadership in designing sustainable software products, setting development standards, automated code review processes, continuous builds and rigorous testing
- Ability to effectively lead and communicate across third parties, technical and business product managers on solution design
- Primary focus is spent writing code and API specs, conducting code reviews and testing in ongoing sprints, or doing proofs of concept/automation tools
- Applies visualization and other techniques to fast-track concepts
- Functions as a core member of an Agile team driving user story analysis and elaboration, design and development of software applications, testing, and build automation tools
- Works on a specific platform/product or as part of a dynamic resource pool assigned to projects based on demand and business priority
- Identifies opportunities to adopt innovative technologies
Qualification:
- Bachelor's degree in computer science, computer engineering, other technical discipline, or equivalent work experience
- 5+ years of software development experience
- 3-5 years of experience leading teams of engineers
- Demonstrated experience with Agile or other rapid application development methods
- Demonstrated experience with object-oriented design and coding
- Demonstrated experience with these core technical skills (mandatory): Core Java, Spring Framework, Java EE; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark; relational databases (PostgreSQL / MySQL / DB2, etc.); data serialization techniques (Avro); cloud development (microservices); parallel and distributed (multi-tiered) systems; application design, software development and automated testing
- Demonstrated experience with these additional technical skills (nice to have): Unix / shell scripting; Python / Scala; message queuing and stream processing (Kafka); Elasticsearch; AJAX tools/frameworks; web services, open API development, and REST concepts
- Experience implementing integrated automated release management using tools/technologies/frameworks like Maven, Git, code/security review tools, Jenkins, automated testing and JUnit.
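The core skills above name Avro data serialization without detail. As a hedged sketch only, here is one common way to serialize and read back a record with Avro in Python using the fastavro library (one of several options, not named in the posting); the schema and field names are invented for illustration.

from io import BytesIO
import fastavro

# Hypothetical record schema for a personalized merchant offer.
schema = fastavro.parse_schema({
    "name": "Offer",
    "type": "record",
    "fields": [
        {"name": "card_member_id", "type": "string"},
        {"name": "merchant_id", "type": "string"},
        {"name": "discount_pct", "type": "double"},
    ],
})

buf = BytesIO()
fastavro.writer(buf, schema, [
    {"card_member_id": "cm-123", "merchant_id": "m-456", "discount_pct": 10.0},
])

buf.seek(0)
for record in fastavro.reader(buf):
    print(record)  # round-trips the serialized record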

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role: Data Engineer (Scala)
Must-have experience: 5+ years overall, 3+ years relevant.
Must-have skills: Spark, SQL, Scala, PySpark.
Good to have: AWS, EMR, S3, Hadoop, Control-M.
Key responsibilities (individual contributor or part of a team):
1) Design strategies and programs to collect, store, analyse and visualize data from various sources.
2) Develop big data solution recommendations and ensure implementation of the chosen big data solution.
3) Program in different programming/scripting languages such as Scala, Python, Java, Pig or SQL.
4) Proficient knowledge of big data frameworks such as Spark and MapReduce.
5) Understanding of Hadoop, Hive, HBase, MongoDB and/or MapReduce.
6) Experience with a large cloud-computing infrastructure solution such as Amazon Web Services or Elastic MapReduce.
7) Tune the Spark engine for high data volumes (approx. a billion records) processed using BDM.
8) Troubleshoot data issues and deep-dive into root cause analysis of any performance issue.

Posted 1 week ago

Apply

14.0 years

1 - 8 Lacs

Hyderābād

On-site

Job Description: Lead Data Engineer
About AT&T Chief Data Office
The Chief Data Office (CDO) at AT&T is responsible for leveraging data as a strategic asset to drive business value. The team focuses on data governance, data engineering, artificial intelligence, and advanced analytics to enhance customer experience, optimize operations, and enable innovation. Candidates will work on cutting-edge cloud technologies, AI/ML, and data-driven solutions; be part of a dynamic and innovative team driving digital transformation; lead high-impact Agile initiatives with top talent in the industry; get the opportunity to grow and implement Agile at an enterprise level; and be offered competitive compensation, a flexible work culture, and learning opportunities.
Shift timing (if any): 12:30 pm to 9:30 pm IST (Bangalore) / 1:00 pm to 10:00 pm IST (Hyderabad)
Work mode: Hybrid (3 days mandatory in office)
Location / Additional Location (if any): Bangalore, Hyderabad
Job Title / Advertised Job Title: Lead Data Engineer
Roles and Responsibilities
Create product roadmaps and project plans. Design, develop, and maintain scalable ETL pipelines using Azure services to process, transform, and load large datasets into cloud platforms. Collaborate with cross-functional teams, including data architects, analysts, and business stakeholders, to gather data requirements and deliver efficient data solutions. Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure. Work with data scientists/architects and analysts to understand data needs and create effective data workflows. Exposure to Snowflake warehouse. Big Data Engineer with a solid background in the larger Hadoop ecosystem and real-time analytics tools including PySpark/Scala-Spark/Hive/Hadoop CLI/MapReduce/Storm/Kafka/Lambda architecture. Implement data validation and cleansing techniques. Improve the scalability, efficiency, and cost-effectiveness of data pipelines. Experience in designing and hands-on development of cloud-based analytics solutions. Expert-level understanding of Azure Data Factory, Azure Data Lake, Snowflake, and PySpark is required. Good to have: full-stack development background with Java and JavaScript/CSS/HTML; knowledge of ReactJS/Angular is a plus. Design and build data pipelines using API ingestion and streaming ingestion methods. Unix/Linux expertise; comfortable with the Linux operating system and shell scripting. Knowledge of DevOps processes (including CI/CD) and infrastructure as code is desirable. PL/SQL and RDBMS background with Oracle/MySQL. Comfortable with microservices, CI/CD, Docker, and Kubernetes. Strong experience in common Data Vault data warehouse modelling principles. Creating/modifying Docker images and deploying them via Kubernetes.
Additional Skills Required: The ideal candidate should have at least 14 years of experience in IT, in addition to the following: 10+ years of extensive development experience using Snowflake or a similar data warehouse technology; working experience with dbt and other technologies of the modern data stack, such as Snowflake, Azure, Databricks and Python; experience in agile processes such as Scrum; extensive experience in writing advanced SQL statements and performance tuning; experience in data ingestion techniques using custom or SaaS tools; experience in data modelling and the ability to optimize existing and new data models; experience in data mining, data warehouse solutions, ETL, and using databases in a business environment with large-scale, complex datasets.
Technical Qualifications: Preferred: Bachelor's degree in Computer Science, Information Systems, or a related field. Experience in high-tech, software, or telecom industries is a plus. Strong analytical skills to translate insights into impactful product initiatives. #DataEngineering
Weekly Hours: 40
Time Type: Regular
Location: Hyderabad, Andhra Pradesh, India
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
Job ID: R-63188
Date posted: 06/05/2025
Benefits: Your needs? Met. Your wants? Considered. Take a look at our comprehensive benefits: Paid Time Off, Tuition Assistance, Insurance Options, Discounts, Training & Development.
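The posting above mentions implementing data validation and cleansing techniques only in passing. As a hedged illustration of the kind of PySpark step such a pipeline might include (column names, storage paths, and rules are assumptions, not part of the role description), a minimal cleansing pass could look like this:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("validation_example").getOrCreate()

# Hypothetical ingested dataset in an Azure Data Lake landing zone.
raw = spark.read.parquet("abfss://landing@exampleaccount.dfs.core.windows.net/orders/")

# Basic cleansing: enforce non-null keys, non-negative amounts, trimmed strings, no duplicate keys.
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .filter(F.col("amount") >= 0)
       .withColumn("customer_name", F.trim(F.col("customer_name")))
       .dropDuplicates(["order_id"])
)

# Simple validation metric: how many rows were rejected by the rules above.
rejected = raw.count() - clean.count()
print(f"rows rejected by validation: {rejected}")

clean.write.mode("overwrite").parquet("abfss://curated@exampleaccount.dfs.core.windows.net/orders/")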

Posted 1 week ago

Apply

4.0 years

6 - 10 Lacs

Bengaluru

On-site

About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
Overview about TII: At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
Position Overview: Working at Target means helping all families discover the joy of everyday life. We bring that vision to life through our values and culture. As a lead engineer, you serve as the technical anchor for the engineering team that supports a product. You create, own and are responsible for the application architecture that best serves the product in its functional and non-functional needs. You identify and drive architectural changes to accelerate feature development or improve the quality of service (or both). You have deep and broad engineering skills and are capable of standing up an architecture in its whole on your own, but you choose to influence a wider team by acting as a "force multiplier".
Team Overview: IT Data Platform (ITDP) is the powerhouse data platform driving Target's tech efficiencies, seamlessly integrating operational and analytical needs. It fuels every facet of Target Tech, from boosting developer productivity and enhancing system intelligence to ensuring top-notch security and compliance. Target Tech builds the technology that makes Target the easiest, safest and most joyful place to shop and work. From digital to supply chain to cybersecurity, develop innovations that power the future of retail while relying on best-in-class data science algorithms that drive value. Target Tech is at the forefront of the industry, revolutionizing technology efficiency with cutting-edge data and AI. ITDP meticulously tracks tech data points across stores, multi-cloud environments, data centers, and distribution centers. IT Data Platform leverages advanced AI algorithms to analyze vast datasets, providing actionable insights that drive strategic decision-making. By integrating Generative AI, it enhances predictive analytics, enabling proactive solutions and optimizing operational efficiencies.
Basic Qualifications: A 4-year degree or equivalent experience. 8+ years of industry experience in software design, development, and algorithm-related solutions. 8+ years of experience in programming languages such as Java, Python, or Scala. Hands-on experience developing distributed systems, large-scale systems, databases and/or backend APIs. Demonstrated expertise in analysis and optimization of systems capacity, performance, and operational health. Stays current with new and evolving technologies via formal training and self-directed education.
Preferred Qualifications: Experience with Big Data tools and Hadoop ecosystems, such as Apache Spark, Apache Iceberg, Kafka, ORC, MapReduce, YARN, Hive, HDFS, etc. Experience in architecting, building and running a large-scale system. Experience with industry open-source projects and/or databases and/or large-data distributed systems.
Key Responsibilities: Data Platform Management: lead the design, implementation, and optimization of the Data Platform, ensuring scalability and data correctness. Development: oversee the development and maintenance of all core components of the platform. Unified APIs: manage and create highly scalable APIs with GraphQL at enterprise scale. Platform Monitoring and Observability: ensure monitoring solutions and security tools to maintain integrity and trust in data and APIs. Leadership and Mentorship: provide technical leadership and mentorship to engineering teams, fostering a culture of collaboration and continuous improvement. Technology Design and Architecture: articulate technology designs and architectural decisions to team members, ensuring alignment with business goals and technical standards.
Useful links: Life at Target - https://india.target.com/ | Benefits - https://india.target.com/life-at-target/workplace/benefits | Culture - https://india.target.com/life-at-target/belonging
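The preferred qualifications above pair Apache Spark with Apache Iceberg but give no detail. As a hedged sketch (the catalog name, namespace, table, and paths are invented, and the cluster is assumed to already have an Iceberg catalog named demo configured), this is one common way to create and append to an Iceberg table from PySpark:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Assumes spark.sql.catalog.demo is configured as an Iceberg catalog on the cluster.
spark = SparkSession.builder.appName("iceberg_example").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw_events/")

# Create (or replace) a date-partitioned Iceberg table from the DataFrame.
(events.writeTo("demo.itdp.tech_events")
       .using("iceberg")
       .partitionedBy(col("event_date"))
       .createOrReplace())

# Later, incremental loads can append to the same table.
new_events = spark.read.parquet("s3://example-bucket/raw_events_delta/")
new_events.writeTo("demo.itdp.tech_events").append()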

Posted 1 week ago

Apply

5.0 years

3 Lacs

Bengaluru

On-site

This role is for one of our clients.
Industry: Technology, Information and Media
Seniority level: Mid-Senior level
Min Experience: 5 years
Location: Bengaluru, Karnataka, India
Job Type: full-time
We are seeking a Big Data Engineer with deep technical expertise to join our fast-paced, data-driven team. In this role, you will be responsible for designing and building robust, scalable, and high-performance data pipelines that fuel real-time analytics, business intelligence, and machine learning applications across the organization. If you thrive on working with large datasets, cutting-edge technologies, and solving complex data engineering challenges, this is the opportunity for you.
What You'll Do
Design & Build Pipelines: Develop efficient, reliable, and scalable data pipelines that process large volumes of structured and unstructured data using big data tools.
Distributed Data Processing: Leverage the Hadoop ecosystem (HDFS, Hive, MapReduce) to manage and transform massive datasets.
Starburst (Trino) Integration: Design and optimize federated queries using Starburst, enabling seamless access across diverse data platforms (a sketch follows below).
Databricks Lakehouse Development: Utilize Spark, Delta Lake, and MLflow on the Databricks Lakehouse Platform to enable unified analytics and AI workloads.
Data Modeling & Architecture: Work with stakeholders to translate business requirements into flexible, scalable data models and architecture.
Performance & Optimization: Monitor, troubleshoot, and fine-tune pipeline performance to ensure efficiency, reliability, and data integrity.
Security & Compliance: Implement and enforce best practices for data privacy, security, and compliance with global regulations like GDPR and CCPA.
Collaboration: Partner with data scientists, product teams, and business users to deliver impactful data solutions and improve decision-making.
What You Bring
Must-Have Skills: 5+ years of hands-on experience in big data engineering, data platform development, or similar roles. Strong experience with Hadoop, including HDFS, Hive, HBase, and MapReduce. Deep understanding and practical use of Starburst (Trino) or Presto for large-scale querying. Hands-on experience with the Databricks Lakehouse Platform, Spark, and Delta Lake. Proficiency in SQL and programming languages like Python or Scala. Strong knowledge of data warehousing, ETL/ELT workflows, and schema design. Familiarity with CI/CD tools, version control (Git), and workflow orchestration tools (Airflow or similar).
Nice-to-Have Skills: Experience with cloud environments such as AWS, Azure, or GCP. Exposure to Docker, Kubernetes, or infrastructure-as-code tools. Understanding of data governance and metadata management platforms. Experience supporting AI/ML initiatives with curated datasets and pipelines.
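Federated querying through Starburst (Trino) is named above without an example. As a hedged sketch, the following uses the open-source trino Python client to join data across two catalogs; the host, catalogs, schemas, and table names are placeholders, and a real Starburst deployment may require different authentication.

from trino.dbapi import connect

# Placeholder connection details for a Starburst/Trino coordinator.
conn = connect(
    host="trino.example.internal",
    port=8080,
    user="data_engineer",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()

# A federated join: Hive-managed fact data joined to a PostgreSQL dimension table.
cur.execute("""
    SELECT d.region, count(*) AS orders
    FROM hive.sales.orders o
    JOIN postgresql.public.customers d ON o.customer_id = d.id
    GROUP BY d.region
""")
for region, orders in cur.fetchall():
    print(region, orders)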

Posted 1 week ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role Description
Experience Range: 12+ Years
Hiring Locations: Chennai, Trivandrum, Kochi
We are seeking an experienced Data Architect with a robust background in SQL, T-SQL, data modeling, and cloud data solutions to lead the design and implementation of enterprise data strategies. The ideal candidate will have hands-on experience in the Health Payer domain, with a preference for familiarity with FACETS or similar platforms. This role combines deep technical expertise with leadership, innovation, and stakeholder management.
Responsibilities
Data Architecture & Strategy: Design scalable, secure, and high-performance data architectures. Lead development of long-term data strategy and short-term tactical data solutions. Define and implement governance frameworks, metadata accuracy protocols, and regulatory compliance measures.
Data Modeling & Optimization: Develop logical and physical data models across systems and platforms. Perform gap analysis and align architecture to business and technical goals. Define systems/subsystems that support program goals.
Cloud & Infrastructure: Deploy and optimize data tools in AWS, Azure, or GCP. Collaborate with DevOps/Cloud teams to ensure performance, scalability, and cost-efficiency.
Team Leadership & Mentoring: Lead and mentor a team of 15+ engineers. Facilitate onboarding, training, and skill-building. Drive solution architecture best practices.
Stakeholder & Project Management: Collaborate with business owners, architects, and cross-functional teams. Define NFRs, evaluate trade-offs, and support project estimation and planning. Identify technical risks and develop mitigation strategies.
Innovation & Thought Leadership: Participate in technical forums and share knowledge across teams. Explore new tools and frameworks, and contribute to IP/reusable components. Lead PoC development and beta testing of new service offerings.
Operational Excellence: Automate and optimize data workflows. Document and track architectural decisions. Evaluate solutions through audits and performance metrics.
Mandatory Skills: 12+ years in IT with at least 3 years as a Data Architect. Expert-level SQL, T-SQL, and relational database systems. 3+ years of hands-on data modeling and database design. Strong understanding of ETL processes, data governance, and data integration frameworks. Experience in cloud platforms: AWS, Azure, or GCP. Knowledge of data warehouse, Hadoop, data analytics, and transformation tools. Certification in a Big Data/Architect track (AWS/Azure/GCP).
Good-to-Have Skills: Experience in the Health Payer domain (FACETS preferred). Knowledge of Hadoop technologies (Hive, Pig, MapReduce). Exposure to data visualization, streaming, and NoSQL databases. Proficiency in Python, Java, and tools like PowerPoint and Visio. Experience with UNIX, Windows, and backup/archival software.
Soft Skills: Strong analytical and problem-solving abilities. Creativity in solution development. Attention to detail and a quality-focused mindset. Excellent communication and stakeholder management skills. High resilience and self-learning capability. Leadership, mentoring, and performance management skills.
Skills: Solution Architecture, MySQL, Database, Healthcare

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Position: Lead Data Engineer
Experience: 7+ Years
Location: Remote
Summary
We are looking for a Lead Data Engineer responsible for ETL processes and documentation in building scalable data warehouses and analytics capabilities. This role involves maintaining existing systems, developing new features, and implementing performance improvements.
Key Responsibilities
Build ETL pipelines using Fivetran and dbt for internal and client projects across platforms like Azure, Salesforce, and AWS. Monitor active production ETL jobs. Create and maintain data lineage documentation to ensure complete system traceability. Develop design/mapping documents for clear and testable development, QA, and UAT. Evaluate and implement new data integration tools based on current and future requirements. Identify and eliminate process redundancies to streamline data operations. Work with the Data Quality Analyst to implement validation checks across ETL jobs (a sketch of one such check follows below). Design and implement large-scale data warehouses, BI solutions, and Master Data Management (MDM) systems, including Data Lakes/Data Vaults.
Required Skills & Qualifications
Bachelor's degree in Computer Science, Software Engineering, Math, or a related field. 6+ years of experience in data engineering, business analytics, or software development. 5+ years of experience with strong SQL development skills. Hands-on experience in Snowflake and Azure Data Factory (ADF). Proficient in ETL toolsets such as Informatica, Talend, dbt, and ADF. Experience with PHI/PII data and working in the healthcare domain is preferred. Strong analytical and critical thinking skills. Excellent written and verbal communication. Ability to manage time and prioritize tasks effectively. Familiarity with scripting and open-source platforms (e.g., Python, Java, Linux, Apache, Chef). Experience with BI tools like Power BI, Tableau, or Cognos. Exposure to Big Data technologies: Snowflake (Snowpark), Apache Spark, Hadoop, Hive, Sqoop, Pig, Flume, HBase, MapReduce.
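The responsibilities above include implementing validation checks across ETL jobs without saying what such a check looks like. As a hedged sketch in Python with the Snowflake connector (connection parameters, schema, and table names are placeholders; a real job would pull credentials from a secrets manager), one simple post-load check compares staging and target row counts:

import snowflake.connector

# Placeholder credentials for illustration only.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_service",
    password="********",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

def row_count(table: str) -> int:
    cur = conn.cursor()
    try:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]
    finally:
        cur.close()

staged = row_count("STAGING.ORDERS")
loaded = row_count("CORE.ORDERS")

# Fail the job if the load dropped rows unexpectedly.
if loaded < staged:
    raise RuntimeError(f"validation failed: staged {staged} rows, loaded {loaded}")
print("row-count validation passed")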

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Join us as a Software Engineer
This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll be engineering and maintaining innovative, customer-centric, high-performance, secure and robust solutions. It's a chance to hone your existing technical skills and advance your career while building a wide network of stakeholders. We're offering this role at associate level.
What you'll do
In your new role, you'll be working within a feature team to engineer software, scripts and tools, as well as liaising with other engineers, architects and business analysts across the platform. You'll also be: producing complex and critical software rapidly and to a high quality which adds value to the business; working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning; collaborating to optimise our software engineering capability; designing, producing, testing and implementing our working software solutions; and working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations.
The skills you'll need
To take on this role, you'll need at least four years of experience in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers. You'll also need: experience of working with development and testing tools, bug tracking tools and wikis; experience in AWS native services, particularly S3, Glue, Lambda, IAM, and Elastic MapReduce; strong proficiency in Terraform for AWS cloud, Python for developing AWS Lambdas, Airflow DAGs and shell scripting; experience with Apache Airflow for workflow orchestration; and experience of DevOps and Agile methodology and associated toolsets.
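The skills above name AWS Glue, Lambda, Python, and Apache Airflow DAGs without showing how they fit together. As a hedged sketch for Airflow 2.x (the DAG id, Glue job name, region, and schedule are invented, and triggering Glue via boto3 inside a PythonOperator is just one of several patterns), a minimal orchestration DAG might look like this:

from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_glue_job(**_):
    # Start a (hypothetical) Glue job and return its run id for downstream tasks.
    glue = boto3.client("glue", region_name="eu-west-2")
    response = glue.start_job_run(JobName="example-curation-job")
    return response["JobRunId"]

with DAG(
    dag_id="example_glue_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    trigger_glue = PythonOperator(
        task_id="start_glue_job",
        python_callable=run_glue_job,
    )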

Posted 1 week ago

Apply

8.0 - 12.0 years

10 - 15 Lacs

Bengaluru

Work from Office


Lead Data Analyst
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
Overview about TII: At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. As a part of the Merchandising Analytics and Insights team, our analysts work closely with business owners as well as technology and data product teams staffed with product owners and engineers. They support all Merchandising strategic initiatives with data, reporting and analysis. Merchandising teams rely on this team of analysts to bring data to support decision making.
Principal Duties and Responsibilities: As a Lead Data Analyst, your responsibilities will be exploring data, technologies, and the application of mathematical techniques to derive business insights. Data analysts spend their time determining the best approach to gather, model, manipulate, analyze and present data. You will lead an agile team, which requires active participation in ceremonies and team meetings. In this role, you'll have opportunities to continuously upskill to stay current with new technologies in the industry via formal training, peer training groups and self-directed education. This role specifically will support new initiatives requiring new data and metric development. Your curiosity and ability to roll up guest-level insights will be critical. The team will leverage in-store analytics data to measure the impact that promotions, product placement, or physical changes at any given store have on store operations, guest experience and purchase decisions. This new capability is still being defined, which allows for creative thinking and leadership opportunities. Job duties may change at any time due to business needs.
Key Responsibilities: Work with MC, Merch, Planning, Marketing, Digital, etc. teams to use data to identify opportunities, bridge gaps, and build advanced capabilities. Partner with Product, DS, DE, etc. to determine the best approach to gather, model, manipulate, analyze and present data. Develop data and metrics to support key business strategies, initiatives, and decisions. Explore data, technologies, and the application of mathematical techniques to derive business insights.
Desired Skills & Experience: Ability to break down complex problems, identify the root cause of an issue, and develop sustainable solutions. Ability to influence cross-functional teams and partners at multiple levels of the organization. Possess analytical skills (SQL, Python, R, etc.) to find, manipulate, and present data in meaningful ways to clients. Desire to continuously upskill to stay current with new technologies in the industry via formal training, peer training groups and self-directed education.
About you (technical): B.E/B.Tech, M.Tech, M.Sc., MCA. Overall 8-12 years of experience, including 6-8 years of data ecosystem experience. Strong architect of data capabilities and analytical tools. Proven experience architecting enterprise-level data warehouse solutions and BI implementations across Domo, Tableau and other visualization tools. Provide expertise and the ability to train and guide the team to implement top design architectures to build next-generation analytics. Deep big data experience: solid experience in the Hadoop ecosystem and its components, writing MapReduce programs, developing Hive and PySpark SQL, and designing and developing Oozie workflows. Hands-on experience in object-oriented or functional programming such as Scala and/or Python/R or other open-source languages. Strong foundational mathematics and statistics. Experience in analytical techniques like linear and non-linear regression, logistic regression, time-series models, classification techniques, etc.
About you (soft skills for a lead role): Strong stakeholder management with product teams and business leaders. Strong problem-solving and analytical skills and the ability to manage ambiguity. Ability to communicate results of complex analytic findings to both technical and non-technical audiences and business leaders. Ability to lead change and work through conflict and setbacks. Experience working in an agile environment (stories, backlog refinement, sprints, etc.). Excellent attention to detail and timelines. Strong sense of ownership. Desire to continuously upskill to stay current with new technologies in the industry via formal training, peer training groups and self-directed education.
Useful links: Life at Target - https://india.target.com/ | Benefits - https://india.target.com/life-at-target/workplace/benefits | Culture - https://india.target.com/life-at-target/belonging

Posted 1 week ago

Apply

4.0 - 9.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Naukri logo

About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from diverse backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values diverse backgrounds. We believe your unique perspective is important, and you'll build relationships by being authentic and respectful. At Target, inclusion is part of the core value. We aim to create equitable experiences for all, regardless of their dimensions of difference. As an equal opportunity employer, Target provides diverse opportunities for everyone to grow and win.
About the role: Working at Target means helping all families discover the joy of everyday life. We bring that vision to life through our values and culture. As a Senior Engineer, you serve as the technical anchor for the engineering team that supports a product. You create, own and are responsible for the application architecture that best serves the product in its functional and non-functional needs. You identify and drive architectural changes to accelerate feature development or improve the quality of service (or both). You have deep and broad engineering skills and are capable of standing up an architecture in its whole on your own, but you choose to influence a wider team by acting as a "force multiplier".
About this team: IT Data Platform (ITDP) is the powerhouse data platform driving Target's tech efficiencies, seamlessly integrating operational and analytical needs. It fuels every facet of Target Tech, from boosting developer productivity and enhancing system intelligence to ensuring top-notch security and compliance. Target Tech builds the technology that makes Target the easiest, safest and most joyful place to shop and work. From digital to supply chain to cybersecurity, develop innovations that power the future of retail while relying on best-in-class data science algorithms that drive value. Target Tech is at the forefront of the industry, revolutionizing technology efficiency with cutting-edge data and AI. ITDP meticulously tracks tech data points across stores, multi-cloud environments, data centers, and distribution centers. IT Data Platform leverages advanced AI algorithms to analyze vast datasets, providing actionable insights that drive strategic decision-making. By integrating Generative AI, it enhances predictive analytics, enabling proactive solutions and optimizing operational efficiencies.
Basic Qualifications: A 4-year degree or equivalent experience. 3+ years of industry experience in software design, development, and algorithm-related solutions. 3+ years of experience in programming languages such as Java, Python, or Scala. Hands-on experience developing distributed systems, large-scale systems, databases and/or backend APIs. Demonstrated experience in analysis and optimization of systems capacity, performance, and operational health.
Preferred Qualifications: Experience with Big Data tools and Hadoop ecosystems, such as Apache Spark, Apache Iceberg, Kafka, ORC, MapReduce, YARN, Hive, HDFS, etc. Experience in developing and running a large-scale system. Experience with industry open-source projects and/or databases and/or large-data distributed systems.
Key Responsibilities: Data Platform Management: design, implementation, and optimization of the Data Platform, ensuring scalability and data correctness. Development: oversee the development and maintenance of all core components of the platform. Unified APIs: implementation of highly scalable APIs with GraphQL/REST at enterprise scale. Platform Monitoring and Observability: ensure monitoring solutions and security tools to maintain integrity and trust in data and APIs. Leadership and Mentorship: provide technical leadership and mentorship to junior engineers, fostering a culture of collaboration and continuous improvement.
Useful links: Life at Target - https://india.target.com/ | Benefits - https://india.target.com/life-at-target/workplace/benefits | Culture - https://india.target.com/life-at-target/diversity-and-inclusion

Posted 1 week ago

Apply

6.0 - 10.0 years

3 - 8 Lacs

Pune

Work from Office


Job Information
Job Opening ID: ZR_1671_JOB
Date Opened: 20/12/2022
Industry: Technology
Work Experience: 6-10 years
Job Title: Oracle Warehouse Builder/Developer
City: Pune
Province: Maharashtra
Country: India
Postal Code: 411001
Number of Positions: 4
Roles & Responsibilities:
Oracle Warehouse Builder (OWB), Oracle Workflow Builder, Oracle TBSS. Oracle Warehouse Builder 9i (Client Version 9.0.2.62.3 / Repository Version 9.0.2.0.0), Oracle Warehouse Builder 4, Oracle Workflow Builder 2.6.2, Oracle Database 10g (TNS for IBM/AIX RISC System/6000, Version 10.2.0.5.0 - Production).
More than 5 years' experience with Oracle Warehouse Builder (OWB) and Oracle Workflow Builder. Expert knowledge of Oracle PL/SQL to develop everything from individual code objects to entire data marts. Scheduling tools: Oracle TBSS (creating and running DBMS_SCHEDULER jobs) and trigger-based scheduling for file sources based on control files. Must have design and development experience in data pipeline solutions from different source systems (files, Oracle) to data lakes. Must have been involved in creating/designing Hive tables and loading and analyzing data using Hive queries. Must have knowledge of CA Workload Automation DE 12.2 to create and schedule jobs. Extensive knowledge of the entire life cycle of Change/Incident/Problem management using ServiceNow. Oracle Enterprise Manager 10gR1 (monitoring jobs and tablespace utilization). Extensive knowledge in fetching mainframe COBOL files (ASCII and EBCDIC formats) to the landing area and processing (formatting) and loading (with error handling) these files into Oracle tables using SQL*Loader and external tables. Extensive knowledge of Oracle Forms 6 to integrate with OWB 4. Work closely with the business owner teams and functional/data analysts in the entire development/BAU process. Work closely with the AIX support and DBA support teams on access privileges, storage issues, etc. Work closely with the Batch Operations team and MFT teams on file transfer issues.
Migration of Oracle to the Hadoop ecosystem:
Must have working experience with Hadoop ecosystem elements like HDFS, MapReduce, YARN, etc. Must have working knowledge of Scala and Spark DataFrames to convert the existing code to Hadoop data lakes. Must have design and development experience in data pipeline solutions from different source systems (files, Oracle) to data lakes. Must have been involved in creating/designing Hive tables and loading and analyzing data using Hive queries. Must have knowledge of creating Hive partitions, dynamic partitions and buckets (see the sketch below). Must have knowledge of CA Workload Automation DE 12.2 to create and schedule jobs. Use Denodo for data virtualization to provide the required data access to end users.
I'm interested
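Hive partitioning and dynamic partitions are listed above as required knowledge without detail. As a hedged sketch (database, table, and column names are invented, and the settings shown are conventional defaults rather than anything the posting mandates), this is one common pattern for creating a partitioned Hive table and loading it with a dynamic-partition insert from PySpark:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive_partitioning_example")
         .enableHiveSupport()
         .getOrCreate())

# Typical Hive settings for dynamic-partition inserts.
spark.sql("SET hive.exec.dynamic.partition = true")
spark.sql("SET hive.exec.dynamic.partition.mode = nonstrict")

# A date-partitioned table; bucketing could additionally be declared with a CLUSTERED BY clause in plain HiveQL.
spark.sql("""
    CREATE TABLE IF NOT EXISTS finance.transactions (
        txn_id STRING,
        account_id STRING,
        amount DECIMAL(18,2)
    )
    PARTITIONED BY (txn_date STRING)
    STORED AS ORC
""")

# Dynamic-partition insert: each distinct txn_date in the source lands in its own partition.
spark.sql("""
    INSERT OVERWRITE TABLE finance.transactions PARTITION (txn_date)
    SELECT txn_id, account_id, amount, txn_date
    FROM finance.staging_transactions
""")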

Posted 1 week ago

Apply

12.0 - 15.0 years

13 - 17 Lacs

Mumbai

Work from Office


Job Information
Job Opening ID: ZR_1688_JOB
Date Opened: 24/12/2022
Industry: Technology
Work Experience: 12-15 years
Job Title: Big Data Architect
City: Mumbai
Province: Maharashtra
Country: India
Postal Code: 400008
Number of Positions: 4
Location: Mumbai, Pune, Chennai, Hyderabad, Coimbatore, Kolkata
12+ years' experience in the big data space across architecture, design, development, testing and deployment, with a full understanding of the SDLC.
1. Experience with Hadoop and the related technology stack.
2. Experience with the Hadoop ecosystem (HDP + CDP) / big data (especially Hive); hands-on experience with programming languages such as Java/Scala/Python; hands-on experience/knowledge of Spark.
3. Responsible for, and focused on, uptime and reliable running of all ingestion/ETL jobs.
4. Good SQL and experience working in a Unix/Linux environment is a must.
5. Create and maintain optimal data pipeline architecture.
6. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
7. Good to have: cloud experience.
8. Good to have: experience with Hadoop integration with data visualization tools like Power BI.
I'm interested

Posted 1 week ago

Apply

6.0 - 10.0 years

3 - 7 Lacs

Chennai

Work from Office


Job Information
Job Opening ID: ZR_2199_JOB
Date Opened: 15/04/2024
Industry: Technology
Work Experience: 6-10 years
Job Title: Sr Data Engineer
City: Chennai
Province: Tamil Nadu
Country: India
Postal Code: 600004
Number of Positions: 4
Strong experience in Python. Good experience in Databricks. Experience working on the AWS/Azure cloud platforms. Experience working with REST APIs and services, and with messaging and event technologies. Experience with ETL or data pipeline build tools. Experience with streaming platforms such as Kafka. Demonstrated experience working with large and complex data sets. Ability to document data pipeline architecture and design. Experience in Airflow is nice to have. Ability to build complex Delta Lakes.
I'm interested
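The posting above asks for Databricks, Kafka streaming, and Delta Lake experience together without an example. As a hedged sketch (the broker address, topic, and storage paths are placeholders, and on Databricks a SparkSession already exists), a minimal PySpark Structured Streaming job reading from Kafka and writing to a Delta table might look like this:

from pyspark.sql import SparkSession, functions as F

# Self-contained session for the sketch; on Databricks, use the provided `spark`.
spark = SparkSession.builder.appName("kafka_to_delta_example").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker.example.internal:9092")
          .option("subscribe", "orders")
          .option("startingOffsets", "latest")
          .load())

# Kafka delivers bytes; cast the value to a string before downstream parsing.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/orders")
         .outputMode("append")
         .start("/mnt/delta/orders"))

query.awaitTermination()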

Posted 1 week ago

Apply

5.0 - 8.0 years

2 - 6 Lacs

Bengaluru

Work from Office


Job Information
Job Opening ID: ZR_1628_JOB
Date Opened: 09/12/2022
Industry: Technology
Work Experience: 5-8 years
Job Title: Data Engineer
City: Bangalore
Province: Karnataka
Country: India
Postal Code: 560001
Number of Positions: 4
Roles and Responsibilities:
4+ years of experience as a data developer using Python. Knowledge of Spark/PySpark is preferable but not mandatory. Azure cloud experience is preferred (alternate cloud experience is fine), ideally across the Azure platform including Azure Data Lake, Databricks, and Data Factory. Working knowledge of different file formats such as JSON, Parquet, CSV, etc. Familiarity with data encryption and data masking. Database experience in SQL Server is preferable; experience in NoSQL databases like MongoDB is preferred. Team player, reliable, self-motivated, and self-disciplined.
I'm interested

Posted 1 week ago

Apply

1.0 years

4 - 6 Lacs

Hyderābād

On-site

- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
Do you want to be a leader in the team that takes Transportation and Retail models to the next generation? Do you have solid analytical thinking and metrics-driven decision making, and want to solve problems with solutions that will meet the growing worldwide need? Then Transportation is the team for you. We are looking for top-notch Data Engineers to be part of our world-class Business Intelligence for Transportation team.
- 4-7 years of experience performing quantitative analysis, preferably for an Internet or technology company
- Strong experience in data warehouse and business intelligence application development
- Data analysis: understand business processes, logical data models and relational database implementations
- Expert knowledge in SQL; able to optimize complex queries
- Basic understanding of statistical analysis; experience in test design and measurement
- Able to execute research projects and generate practical results and recommendations
- Proven track record of working on complex modular projects and assuming a leading role in such projects
- Highly motivated, self-driven, capable of defining own design and test scenarios
- Experience with scripting languages (e.g., Perl, Python) preferred
- BS/MS degree in Computer Science
- Evaluate and implement various big-data technologies and solutions (Redshift, Hive/EMR, Tez, Spark) to optimize processing of extremely large datasets in an accurate and timely fashion; experience with large-scale data processing, data structure optimization and scalability of algorithms is a plus
Key job responsibilities
1. Responsible for designing, building and maintaining complex data solutions for Amazon's Operations businesses
2. Actively participates in the code review process, design discussions, team planning and operational excellence, and constructively identifies problems and proposes solutions
3. Makes appropriate trade-offs, reuses where possible, and is judicious about introducing dependencies
4. Makes efficient use of resources (e.g., system hardware, data storage, query optimization, AWS infrastructure, etc.)
5. Knows about recent advances in distributed systems (e.g., MapReduce, MPP architectures, external partitioning)
6. Asks the right questions when the data model and requirements are not well defined, and comes up with designs that are scalable, maintainable and efficient
7. Makes enhancements that improve the team's data architecture, making it better and easier to maintain (e.g., data auditing solutions, automating ad-hoc or manual operational steps)
8. Owns the data quality of important datasets and any new changes/enhancements
Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with any ETL tool such as Informatica, ODI, SSIS, BODI, DataStage, etc.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
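The page is organized around MapReduce roles, and the posting above expects familiarity with MapReduce-style distributed processing on Hadoop/EMR. As a hedged, minimal illustration (file names and the invocation are assumptions, and this is the classic word-count pattern rather than anything specific to this role), a Hadoop Streaming mapper and reducer in Python could look like this:

# mapper.py - emit (word, 1) for each word read from stdin
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")

# reducer.py - sum counts per word (input arrives sorted by key)
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, 0
    current_count += int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")

# Illustrative invocation on an EMR/Hadoop cluster:
#   hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py \
#     -mapper "python3 mapper.py" -reducer "python3 reducer.py" \
#     -input /data/text -output /data/wordcount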

Posted 1 week ago

Apply

6.0 - 12.0 years

8 - 10 Lacs

Chennai

On-site

Job Description:
About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!
Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.
Process Overview
The Analytics and Intelligence Engine (AIE) team transforms analytical and operational data into Consumer and Wealth Client insights and enables personalization opportunities that are provided to associate- and customer-facing operational applications. The big data technologies used here are Hadoop/PySpark/Scala with HQL as ETL, Unix as the file landing environment, and real-time (or near-real-time) streaming applications.
Job Description
We are actively seeking a talented and motivated Senior Hadoop Developer/Lead to join our dynamic and energetic team. As a key contributor to our agile scrum teams, you will collaborate closely with the Insights division. We are looking for a candidate who can showcase strong technical expertise in Hadoop and related technologies, and who excels at collaborating with both onshore and offshore team members. The role requires both hands-on coding and collaboration with stakeholders to drive strategic design decisions. While functioning as an individual contributor for one or more teams, the Senior Hadoop Data Engineer may also have the opportunity to lead and take responsibility for end-to-end solution design and delivery, based on the scale of implementation and required skillsets.
Responsibilities
Develop high-performance and scalable solutions for Insights, using the big data platform to facilitate the collection, storage, and analysis of massive data sets from multiple channels. Utilize your in-depth knowledge of the Hadoop stack and storage technologies, including HDFS, Spark, Scala, MapReduce, YARN, Hive, Sqoop, Impala, Hue, and Oozie, to design and optimize data processing workflows. Implement near-real-time and streaming data solutions to provide up-to-date information to millions of Bank customers using Spark Streaming and Kafka. Collaborate with cross-functional teams to identify system bottlenecks, benchmark performance, and propose innovative solutions to enhance system efficiency. Take ownership of defining big data strategies and roadmaps for the Enterprise, aligning them with business objectives. Apply your expertise in NoSQL technologies like MongoDB, SingleStore, or HBase to efficiently handle diverse data types and storage requirements. Stay abreast of emerging technologies and industry trends related to big data, continuously evaluating new tools and frameworks for potential integration. Provide guidance and mentorship to junior teammates.
Requirements
Education: Graduation / Post Graduation: BE/B.Tech/MCA. Certifications, if any: NA. Experience Range: 6 to 12 years.
Foundational Skills: Minimum of 7 years of industry experience, with at least 5 years focused on hands-on work in the big data domain. Highly skilled in Hadoop stack technologies such as HDFS, Spark, Hive, YARN, Sqoop, Impala and Hue. Strong proficiency in programming languages such as Python and Scala, and in Bash/shell scripting. Excellent problem-solving abilities and the capability to deliver effective solutions for business-critical applications. Strong command of visual analytics tools, with a focus on Tableau.
Desired Skills: Experience in real-time streaming technologies like Spark Streaming, Kafka, Flink, or Storm. Proficiency in NoSQL technologies like HBase, MongoDB, SingleStore, etc. Familiarity with cloud technologies such as Azure, AWS, or GCP. Working knowledge of machine learning algorithms, statistical analysis, and programming languages (Python or R) to conduct data analysis and develop predictive models that uncover valuable patterns and trends. Proficiency in data integration and data security within the Hadoop ecosystem, including knowledge of Kerberos.
Work Timings: 12:00 PM to 9:00 PM IST
Job Location: Chennai, Mumbai

Posted 1 week ago

Apply

2.0 - 4.0 years

2 - 3 Lacs

Chennai

On-site

The Data Science Analyst 2 is a developing professional role. Applies specialty area knowledge in monitoring, assessing, analyzing and/or evaluating processes and data. Identifies policy gaps and formulates policies. Interprets data and makes recommendations. Researches and interprets factual information. Identifies inconsistencies in data or results, defines business issues and formulates recommendations on policies, procedures or practices. Integrates established disciplinary knowledge within own specialty area with a basic understanding of related industry practices. Good understanding of how the team interacts with others in accomplishing the objectives of the area. Develops working knowledge of industry practices and standards. Limited but direct impact on the business through the quality of the tasks/services provided. Impact of the job holder is restricted to own team.

Responsibilities:
- The Data Engineer is responsible for building data engineering solutions using next-generation data techniques, working with tech leads, product owners, customers and technologists to deliver data products/solutions in a collaborative and agile environment.
- Responsible for design and development of big data solutions; partner with domain experts, product managers, analysts, and data scientists to develop Big Data pipelines in Hadoop.
- Responsible for moving all legacy workloads to the cloud platform.
- Work with data scientists to build Client pipelines using heterogeneous sources and provide engineering services for data science applications.
- Ensure automation through CI/CD across platforms, both in the cloud and on-premises.
- Define needs around maintainability, testability, performance, security, quality and usability for the data platform.
- Drive implementation of consistent patterns, reusable components, and coding standards for data engineering processes.
- Convert SAS-based pipelines into languages like PySpark and Scala to execute on Hadoop, Snowflake and non-Hadoop ecosystems.
- Tune big data applications on Hadoop, cloud and non-Hadoop platforms for optimal performance.
- Applies in-depth understanding of how data analytics integrate within the sub-function, and coordinates and contributes to the objectives of the entire function.
- Produces detailed analysis of issues where the best course of action is not evident from the information available, but actions must be recommended/taken.
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
- 2-4 years of total IT experience
- Experience with Hadoop (Cloudera), big data technologies, cloud and AI tools
- Hands-on experience with HDFS, MapReduce, Hive, Impala, Spark, Kafka, Kudu, Kubernetes, dashboard tools, Snowflake, AWS tools, AI/ML libraries and tools, etc.
- Experience designing and developing data pipelines for data ingestion and transformation
- System-level understanding: data structures, algorithms, distributed storage and compute tools, SQL expertise, shell scripting, scheduling tools, Scrum/Agile methodologies
- Can-do attitude toward solving complex business problems; good interpersonal and teamwork skills

Education:
- Bachelor's/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

- Job Family Group: Technology
- Job Family: Data Science
- Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
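For illustration, a minimal sketch of the SAS-to-PySpark style conversion this role mentions: a PROC SQL aggregation re-expressed with the DataFrame API. The table and column names are assumptions for illustration only, not anything from this posting.

```python
# Hypothetical example: a SAS PROC SQL aggregation rewritten in PySpark.
# Table and column names are assumed placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas-to-pyspark-example").getOrCreate()

# Roughly equivalent to:
#   PROC SQL;
#     CREATE TABLE summary AS
#     SELECT region, SUM(balance) AS total_balance, COUNT(*) AS accounts
#     FROM accounts GROUP BY region;
#   QUIT;
accounts = spark.table("accounts")  # assumed Hive table
summary = (accounts
           .groupBy("region")
           .agg(F.sum("balance").alias("total_balance"),
                F.count(F.lit(1)).alias("accounts")))

summary.write.mode("overwrite").saveAsTable("summary")  # assumed target table
```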

Posted 1 week ago

Apply

7.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site


About Hakkoda
Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone's input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

As an AWS Managed Services Architect, you will play a pivotal role in architecting and optimizing the infrastructure and operations of a complex Data Lake environment for BOT clients. You'll leverage your strong expertise with AWS services to design, implement, and maintain scalable and secure data solutions while driving best practices. You will work collaboratively with delivery teams across the U.S., Costa Rica, Portugal, and other regions, ensuring a robust and seamless Data Lake architecture. In addition, you'll proactively engage with clients to support their evolving needs, oversee critical AWS infrastructure, and guide teams toward innovative and efficient solutions. This role demands a hands-on approach, including designing solutions, troubleshooting, optimizing performance, and maintaining operational excellence.

Role Description
- AWS Data Lake Architecture: Design, build, and support scalable, high-performance architectures for complex AWS Data Lake solutions.
- AWS Services Expertise: Deploy and manage cloud-native solutions using a wide range of AWS services, including but not limited to:
  - Amazon EMR (Elastic MapReduce): Optimize and maintain EMR clusters for large-scale big data processing.
  - AWS Batch: Design and implement efficient workflows for batch processing workloads.
  - Amazon SageMaker: Enable data science teams with scalable infrastructure for model training and deployment.
  - AWS Glue: Develop ETL/ELT pipelines using Glue to ensure efficient data ingestion and transformation.
  - AWS Lambda: Build serverless functions to automate processes and handle event-driven workloads.
  - IAM Policies: Define and enforce fine-grained access controls to secure cloud resources and maintain governance.
  - AWS IoT & Timestream: Design scalable solutions for collecting, storing, and analyzing time-series data.
  - Amazon DynamoDB: Build and optimize high-performance NoSQL database solutions.
- Data Governance & Security: Implement best practices to ensure data privacy, compliance, and governance across the data architecture.
- Performance Optimization: Monitor, analyze, and tune AWS resources for performance efficiency and cost optimization. Develop and manage Infrastructure as Code (IaC) using AWS CloudFormation, Terraform, or equivalent tools to automate infrastructure deployment.
- Client Collaboration: Work closely with stakeholders to understand business objectives and ensure solutions align with client needs.
- Team Leadership & Mentorship: Provide technical guidance to delivery teams through design reviews, troubleshooting, and strategic planning.
- Continuous Innovation: Stay current with AWS service updates, industry trends, and emerging technologies to enhance solution delivery.
- Documentation & Knowledge Sharing: Create and maintain architecture diagrams, SOPs, and internal/external documentation to support ongoing operations and collaboration.

Qualifications
- 7+ years of hands-on experience in cloud architecture and infrastructure (preferably AWS).
- 3+ years of experience specifically in architecting and managing Data Lake or big data solutions on AWS.
- Bachelor's degree in Computer Science, Information Systems, or a related field (preferred).
- AWS certifications such as Solutions Architect Professional or Big Data Specialty.
- Experience with Snowflake, Matillion, or Fivetran in hybrid cloud environments.
- Familiarity with Azure or GCP cloud platforms.
- Understanding of machine learning pipelines and workflows.
- Technical skills: expertise in AWS services such as EMR, Batch, SageMaker, Glue, Lambda, IAM, IoT, Timestream, DynamoDB, and more; strong programming skills in Python for scripting and automation; proficiency in SQL and performance tuning for data pipelines and queries; experience with IaC tools like Terraform or CloudFormation; knowledge of big data frameworks such as Apache Spark, Hadoop, or similar.
- Data governance & security: proven ability to design and implement secure solutions, with strong knowledge of IAM policies and compliance standards.
- Problem-solving: analytical and problem-solving mindset to resolve complex technical challenges.
- Collaboration: exceptional communication skills to engage with technical and non-technical stakeholders; ability to lead cross-functional teams and provide mentorship.

Benefits
- Health insurance
- Paid leave
- Technical training and certifications
- Robust learning and development opportunities
- Incentive
- Toastmasters
- Food program
- Fitness program
- Referral bonus program

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive. Ready to take your career to the next level? Apply today and join a team that's shaping the future!

Hakkoda is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


LivePerson (NASDAQ: LPSN) is the global leader in enterprise conversations. Hundreds of the world's leading brands — including HSBC, Chipotle, and Virgin Media — use our award-winning Conversational Cloud platform to connect with millions of consumers. We power nearly a billion conversational interactions every month, providing a uniquely rich data set and safety tools to unlock the power of Conversational AI for better customer experiences. At LivePerson, we foster an inclusive workplace culture that encourages meaningful connection, collaboration, and innovation. Everyone is invited to ask questions, actively seek new ways to achieve success, and reach their full potential. We are continually looking for ways to improve our products and make things better. This means spotting opportunities, solving ambiguities, and seeking effective solutions to the problems our customers care about.

Overview
We are looking for an experienced Data Engineer to provide data engineering expertise and support for various analytical products of LivePerson, and to assist in migrating our existing data processing ecosystem from Hadoop (Spark, MapReduce, Java, and Scala) to Databricks on GCP. The goal is to leverage Databricks' scalability, performance, and ease of use to enhance our current workflows.

You Will
- Assessment and planning: Review the existing Hadoop infrastructure, including Spark and MapReduce jobs. Analyze Java and Scala codebases for compatibility with Databricks. Identify dependencies, libraries, and configurations that may require modification. Propose a migration plan with clear timelines and milestones.
- Code migration: Refactor Spark jobs to run efficiently on Databricks. Migrate MapReduce jobs where applicable or rewrite them using the Spark DataFrame/Dataset API. Update Java and Scala code to comply with the Databricks runtime environment.
- Testing and validation: Develop unit and integration tests to ensure parity between the existing and new systems. Compare performance metrics before and after migration. Implement error handling and logging consistent with best practices in Databricks.
- Optimization and performance tuning: Fine-tune Spark configurations for performance improvements on Databricks. Optimize data ingestion and transformation processes.
- Deployment and documentation: Deploy migrated jobs to production in Databricks. Document changes, configurations, and processes thoroughly. Provide knowledge transfer to internal teams if required.

Required Skills
- 6+ years of experience in data engineering with a focus on building data pipelines, data platforms and ETL (extract, transform, load) processes on Hadoop and Databricks.
- Strong expertise in Databricks (Spark on Databricks, Delta Lake, etc.), preferably on GCP.
- Strong expertise in the Hadoop ecosystem (Spark, MapReduce, HDFS) with solid foundations in Spark and its internals.
- Proficiency in Scala and Java.
- Strong SQL knowledge.
- Strong understanding of data engineering and optimization techniques.
- Solid understanding of data governance, data modeling and enterprise-scale data lakehouse platforms.
- Experience with test frameworks like Great Expectations.

Minimum Qualifications
- Bachelor's degree in Computer Science or a related field
- Certified Databricks Engineer (preferred)

You Should Be An Expert In
- Databricks with Spark and its internals (3 years) - MUST
- Data engineering in the Hadoop ecosystem (5 years) - MUST
- Scala and Java (5 years) - MUST
- SQL - MUST

Benefits
- Health: medical, dental and vision
- Time away: vacation and holidays
- Development: access to internal professional development resources
- Equal opportunity employer

Why You'll Love Working Here
As leaders in enterprise customer conversations, we celebrate diversity, empowering our team to forge impactful conversations globally. LivePerson is a place where uniqueness is embraced, growth is constant, and everyone is empowered to create their own success. And, we're very proud to have earned recognition from Fast Company, Newsweek, and BuiltIn for being a top innovative, beloved, and remote-friendly workplace.

Belonging At LivePerson
We are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local law. We are committed to the accessibility needs of applicants and employees. We provide reasonable accommodations to job applicants with physical or mental disabilities. Applicants with a disability who require reasonable accommodation for any part of the application or hiring process should inform their recruiting contact upon initial connection.

The talent acquisition team at LivePerson has recently been notified of a phishing scam targeting candidates applying for our open roles. Scammers have been posing as hiring managers and recruiters in an effort to access candidates' personal and financial information. This phishing scam is not isolated to LivePerson and has been documented in news articles and media outlets. Please note that any communication from our hiring teams at LivePerson regarding a job opportunity will only be made by a LivePerson employee with an @liveperson.com email address. LivePerson does not ask for personal or financial information as part of our interview process, including but not limited to your social security number, online account passwords, credit card numbers, passport information and other related banking information. If you have any questions or concerns, please feel free to contact recruiting-lp@liveperson.com.
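For illustration, a minimal sketch of the MapReduce-to-DataFrame style rewrite this migration work involves, using the classic word count as a stand-in. The input path is an assumption, and this is only a representative example, not LivePerson's actual code.

```python
# Illustrative rewrite of a classic MapReduce word count using the Spark
# DataFrame API, the kind of refactor described in the migration work above.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split, lower, col

spark = SparkSession.builder.appName("wordcount-dataframe").getOrCreate()

lines = spark.read.text("hdfs:///data/input/")  # assumed input path

counts = (lines
          .select(explode(split(lower(col("value")), r"\s+")).alias("word"))
          .where(col("word") != "")
          .groupBy("word")
          .count()
          .orderBy(col("count").desc()))

counts.show(20, truncate=False)
```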

Posted 1 week ago

Apply

0.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka

On-site


This role is for one of our clients.
Industry: Technology, Information and Media
Seniority level: Mid-Senior level
Min Experience: 5 years
Location: Bengaluru, Karnataka, India
Job Type: Full-time

We are seeking a Big Data Engineer with deep technical expertise to join our fast-paced, data-driven team. In this role, you will be responsible for designing and building robust, scalable, and high-performance data pipelines that fuel real-time analytics, business intelligence, and machine learning applications across the organization. If you thrive on working with large datasets, cutting-edge technologies, and solving complex data engineering challenges, this is the opportunity for you.

What You'll Do
- Design & build pipelines: Develop efficient, reliable, and scalable data pipelines that process large volumes of structured and unstructured data using big data tools.
- Distributed data processing: Leverage the Hadoop ecosystem (HDFS, Hive, MapReduce) to manage and transform massive datasets.
- Starburst (Trino) integration: Design and optimize federated queries using Starburst, enabling seamless access across diverse data platforms.
- Databricks Lakehouse development: Utilize Spark, Delta Lake, and MLflow on the Databricks Lakehouse Platform to enable unified analytics and AI workloads.
- Data modeling & architecture: Work with stakeholders to translate business requirements into flexible, scalable data models and architecture.
- Performance & optimization: Monitor, troubleshoot, and fine-tune pipeline performance to ensure efficiency, reliability, and data integrity.
- Security & compliance: Implement and enforce best practices for data privacy, security, and compliance with global regulations like GDPR and CCPA.
- Collaboration: Partner with data scientists, product teams, and business users to deliver impactful data solutions and improve decision-making.

What You Bring

Must-Have Skills
- 5+ years of hands-on experience in big data engineering, data platform development, or similar roles.
- Strong experience with Hadoop, including HDFS, Hive, HBase, and MapReduce.
- Deep understanding and practical use of Starburst (Trino) or Presto for large-scale querying.
- Hands-on experience with the Databricks Lakehouse Platform, Spark, and Delta Lake.
- Proficiency in SQL and programming languages like Python or Scala.
- Strong knowledge of data warehousing, ETL/ELT workflows, and schema design.
- Familiarity with CI/CD tools, version control (Git), and workflow orchestration tools (Airflow or similar).

Nice-to-Have Skills
- Experience with cloud environments such as AWS, Azure, or GCP.
- Exposure to Docker, Kubernetes, or infrastructure-as-code tools.
- Understanding of data governance and metadata management platforms.
- Experience supporting AI/ML initiatives with curated datasets and pipelines.
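For illustration, a hedged sketch of the Delta Lake side of the stack described above: curating raw events into a Delta table on Databricks and reading it back for downstream analytics. The input path, column names, and Delta location are assumptions, and the environment is assumed to have Delta Lake support enabled.

```python
# Illustrative Delta Lake usage on a Databricks-style Spark environment.
# Paths and column names are placeholder assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-example").getOrCreate()

# Curate raw JSON events into a daily aggregate (assumed input path/columns).
events = spark.read.json("/mnt/raw/events/")
daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "event_type")
         .count())

(daily.write
 .format("delta")
 .mode("overwrite")
 .save("/mnt/curated/daily_events"))  # assumed Delta location

# Downstream consumers (BI, ML, federated query engines) read the same table.
spark.read.format("delta").load("/mnt/curated/daily_events").show()
```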

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Position Overview

Job Title: Data Platform Engineer - Tech Lead
Location: Pune, India

Role Description
DB Technology is a global team of tech specialists, spread across multiple trading hubs and tech centers. We have a strong focus on promoting technical excellence – our engineers work at the forefront of financial services innovation using cutting-edge technologies. The DB Pune location plays a prominent role in our global network of tech centers; it is well recognized for its engineering culture and strong drive to innovate. We are committed to building a diverse workforce and to creating excellent opportunities for talented engineers and technologists. Our tech teams and business units use agile ways of working to create the best solutions for the financial markets.

CB Data Services and Data Platform
We are seeking an experienced Software Engineer with strong leadership skills to join our dynamic tech team. In this role, you will lead a group of engineers working on cutting-edge technologies in Hadoop, Big Data, GCP, Terraform, BigQuery, Dataproc and data management. You will be responsible for overseeing the development of robust data pipelines, ensuring data quality, and implementing efficient data management solutions. Your leadership will be critical in driving innovation, ensuring high standards in data infrastructure, and mentoring team members. Your responsibilities will include working closely with data engineers, analysts, cross-functional teams, and other stakeholders to ensure that our data platform meets the needs of our organization and supports our data-driven initiatives. Join us in building and scaling our tech solutions, including a hybrid data platform, to unlock new insights and drive business growth. If you are passionate about data engineering, we want to hear from you!

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for 35 yrs. and above

Your Key Responsibilities

Technical Leadership:
- Lead a cross-functional team of engineers in the design, development, and implementation of on-prem and cloud-based data solutions.
- Provide hands-on technical guidance and mentorship to team members, fostering a culture of continuous learning and improvement.
- Collaborate with product management and stakeholders to define technical requirements and establish delivery priorities.

Architectural and Design Capabilities:
- Architect and implement scalable, efficient, and reliable data management solutions to support complex data workflows and analytics.
- Evaluate and recommend tools, technologies, and best practices to enhance the data platform.
- Drive the adoption of microservices, containerization, and serverless architectures within the team.

Quality Assurance:
- Establish and enforce best practices in coding, testing, and deployment to maintain high-quality code standards.
- Oversee code reviews and provide constructive feedback to promote code quality and team growth.

Your Skills And Experience

Technical Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in software engineering, with a focus on Big Data and GCP technologies such as Hadoop, PySpark, Terraform, BigQuery, Dataproc and data management.
- Proven experience in leading software engineering teams, with a focus on mentorship, guidance, and team growth.
- Strong expertise in designing and implementing data pipelines, including ETL processes and real-time data processing.
- Hands-on experience with Hadoop ecosystem tools such as HDFS, MapReduce, Hive, Pig, and Spark.
- Hands-on experience with cloud platforms, particularly Google Cloud Platform (GCP), and its data management services (e.g., Terraform, BigQuery, Cloud Dataflow, Cloud Dataproc, Cloud Storage).
- Solid understanding of data quality management and best practices for ensuring data integrity.
- Familiarity with containerization and orchestration tools such as Docker and Kubernetes is a plus.
- Excellent problem-solving skills and the ability to troubleshoot complex systems.
- Strong communication skills and the ability to collaborate with both technical and non-technical stakeholders.

Leadership Abilities:
- Proven experience in leading technical teams, with a track record of delivering complex projects on time and within scope.
- Ability to inspire and motivate team members, promoting a collaborative and innovative work environment.
- Strong problem-solving skills and the ability to make data-driven decisions under pressure.
- Excellent communication and collaboration skills.
- Proactive mindset, attention to detail, and a constant desire to improve and innovate.

How We'll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
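For illustration, a hedged sketch of the kind of Dataproc-to-BigQuery pipeline step this role describes, using the Spark BigQuery connector (assumed to be installed on the cluster). The project, dataset, table, bucket, and column names are all placeholder assumptions.

```python
# Illustrative PySpark job for Dataproc that writes an aggregate to BigQuery
# via the spark-bigquery connector. All names below are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gcs-to-bigquery").getOrCreate()

trades = spark.read.parquet("gs://example-landing/trades/")  # assumed bucket

daily_positions = (trades
                   .groupBy("account_id", "trade_date")
                   .agg(F.sum("quantity").alias("net_quantity")))

(daily_positions.write
 .format("bigquery")
 .option("table", "example_project.risk.daily_positions")  # assumed table
 .option("temporaryGcsBucket", "example-tmp-bucket")       # assumed staging bucket
 .mode("overwrite")
 .save())
```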

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Delhi, India

On-site


Description
Join GlobalLogic to be a valid part of the team working on a huge software project for a world-class company providing M2M / IoT 4G/5G modules to, e.g., the automotive, healthcare and logistics industries. Through our engagement, we contribute to our customer in developing the end-user modules' firmware, implementing new features, maintaining compatibility with the newest telecommunication and industry standards, as well as performing analysis and estimations of the customer requirements.

Requirements
- BA / BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
- Experience in Cloud SQL and Cloud Bigtable.
- Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics.
- Experience in Google Transfer Appliance, Cloud Storage Transfer Service, BigQuery Data Transfer.
- Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Flume).
- Experience working with technical customers.
- Experience in writing software in one or more languages such as Java or Python.
- 6-10 years of relevant consulting, industry or technology experience.
- Strong problem-solving and troubleshooting skills.
- Strong communicator.

Job Responsibilities
- Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments.
- Experience in technical consulting.
- Experience architecting, developing software, or building internet-scale production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have).
- Experience working with big data, information retrieval, data mining or machine learning, as well as experience building multi-tier high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
- Working knowledge of ITIL and/or agile methodologies.
- Google Data Engineer certified.

What We Offer
- Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.
- Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.
- Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.
- Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!
- High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, we've been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
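For illustration, a hedged sketch of a Dataflow-style pipeline in the Apache Beam Python SDK, the kind of GCP data processing the requirements above call for. The bucket, BigQuery table, and CSV layout are assumptions for illustration only.

```python
# Illustrative Apache Beam pipeline (runnable on Dataflow with the
# DataflowRunner). Bucket, table, and schema are placeholder assumptions.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line: str):
    # Assumed CSV layout: device_id,reading
    device_id, reading = line.split(",")
    return {"device_id": device_id, "reading": float(reading)}


def run():
    options = PipelineOptions()  # pass --runner=DataflowRunner etc. on the CLI
    with beam.Pipeline(options=options) as p:
        (p
         | "Read" >> beam.io.ReadFromText("gs://example-bucket/readings.csv")
         | "Parse" >> beam.Map(parse_line)
         | "Write" >> beam.io.WriteToBigQuery(
             "example_project:iot.readings",  # assumed table
             schema="device_id:STRING,reading:FLOAT",
             write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))


if __name__ == "__main__":
    run()
```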

Posted 1 week ago

Apply

8.0 - 12.0 years

30 - 35 Lacs

Chennai

Work from Office


- 8-12 years of experience in a statistical and/or data science role
- Proven experience as a Data Scientist or in a similar role
- Strong organizational and leadership skills
- Degree in Computer Science, Data Science, Mathematics, or a similar field
- Deep knowledge of machine learning, statistics, optimization or a related field
- Experience in linear and non-linear regression models, including multiple and multivariate regression
- Experience in classification models, including ensemble techniques
- Experience in unsupervised models, including K-Means, DBSCAN, LOF
- Rich experience in NLP techniques
- Experience in deep learning techniques like CNN, RNN, etc.
- Strong real-time experience with at least one machine learning package such as R, Python, or MATLAB is mandatory
- Experience working with large data sets, simulation/optimization and distributed computing tools (MapReduce, Hadoop, Hive, Spark, etc.)
- Excellent written and verbal communication skills, along with a strong desire to work in cross-functional teams
- Attitude to thrive in a fun, fast-paced, start-up-like environment
- Additional advantage: experience with semi-structured and unstructured databases like MongoDB, Cassandra, ArangoDB, Couchbase, GraphDB, etc.
- Optional: programming in C, C++, Java, .NET
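For illustration, a small sketch of two of the unsupervised techniques listed above (K-Means and DBSCAN) applied to a synthetic dataset with scikit-learn; the data and parameter values are assumptions chosen only to make the example self-contained.

```python
# Illustrative comparison of K-Means and DBSCAN on synthetic data.
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import silhouette_score

# Synthetic data standing in for real features.
X, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.8, random_state=42)
X = StandardScaler().fit_transform(X)

kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)
dbscan_labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(X)

print("K-Means silhouette:", round(silhouette_score(X, kmeans_labels), 3))
print("DBSCAN clusters found:",
      len(set(dbscan_labels)) - (1 if -1 in dbscan_labels else 0))
```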

Posted 1 week ago

Apply