
3886 Hadoop Jobs - Page 23

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

3.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job purpose: Work as a Senior Technology Consultant on FinCrime solutions modernisation and transformation projects. Demonstrate deep experience in FinCrime solutions during client discussions and be able to convince the client of the proposed solution. Lead and manage a team of technology consultants to deliver large technology programs in the capacity of project manager.

Work experience requirements:
- Understand high-level business requirements and relate them to appropriate AML/FinCrime product capabilities.
- Define and validate customisation needs for AML products as per client requirements.
- Review client processes and workflows and make recommendations to the client to maximise benefits from the AML product.
- Demonstrate in-depth knowledge of banking best practices and AML product modules.
- Prior experience in one or more COTS products such as Norkom, Actimize, NetReveal, SAS AML VI/VIA, Fircosoft or Quantexa.

Your client responsibilities:
- Work as a Technical Business Systems Analyst on one or more FinCrime projects.
- Interface and communicate with the onsite coordinators.
- Complete assigned tasks on time, with regular status reporting to the lead, the Manager and the onsite coordinators.
- Interface with customer representatives as and when needed.
- Willingness to travel to customer locations on a need basis.

Mandatory skills (Technical):
- Application and solution (workflow, interface) technical design.
- Business requirements definition, analysis, and mapping.
- SQL, plus an understanding of big data technologies such as Spark, Hadoop, or Elasticsearch.
- Scripting/programming: at least one language among Python, Java or Unix shell scripting.
- Hands-on prior experience in NetReveal module development.
- Experience in product migration and implementation; preferably part of at least one AML implementation.
- Experience in cloud and CI/CD (DevOps automation environments).
- High-level understanding of infrastructure designs, data models and application/business architecture.
- Ability to act as the Subject Matter Expert (SME), with excellent functional/operational knowledge of the activities performed by the various teams.

Mandatory skills (Functional):
- Thorough knowledge of the KYC process.
- Thorough knowledge of transaction monitoring and scenarios.
- Hands-on development of one or more modules across KYC (know your customer), CDD (customer due diligence), EDD (enhanced due diligence), sanctions screening, PEP (politically exposed person) screening, adverse media screening, TM (transaction monitoring) and CM (case management).
- Thorough knowledge of case management workflows.
- Experience in requirements gathering, documentation and gap analysis of OOTB (out-of-the-box) vs custom features.
- Agile (Scrum or Kanban) methodology.
- Exposure to conducting or participating in product demonstrations, training, and assessment studies.
- Analytical thinking to find out-of-the-box solutions, with the ability to propose a customisation approach and configuration mapping.
- Excellent client-facing skills.
- Ability to review test cases and guide the testing team as needed.
- End-to-end product implementation and transformation experience is desirable.
Education and experience (mandatory): MBA/MCA/BE/BTech or equivalent, with banking industry experience of 3 to 8 years.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
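For illustration, here is a minimal PySpark sketch of the kind of transaction-monitoring scenario logic this role covers: flagging accounts whose rolling 7-day transaction total crosses a threshold. The paths, column names, and threshold are hypothetical, not part of the posting or any vendor product.

```python
# Illustrative sketch only: a simple AML transaction-monitoring scenario in PySpark.
# Source path, columns (account_id, txn_date, amount) and the threshold are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("tm-scenario-sketch").getOrCreate()

txns = spark.read.parquet("s3://bucket/transactions/")  # hypothetical source

# Aggregate to one row per account per day.
daily = (txns.groupBy("account_id", "txn_date")
             .agg(F.sum("amount").alias("daily_total")))

# Rolling 7-day total per account (assumes one row per account per day).
w = Window.partitionBy("account_id").orderBy("txn_date").rowsBetween(-6, 0)

alerts = (daily.withColumn("rolling_7d", F.sum("daily_total").over(w))
               .filter(F.col("rolling_7d") > 1_000_000)  # illustrative threshold
               .select("account_id", "txn_date", "rolling_7d"))

# Alerts of this shape would feed a case-management workflow downstream.
alerts.write.mode("overwrite").parquet("s3://bucket/tm_alerts/")
```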

Posted 4 days ago

Apply

3.0 - 8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


This EY posting repeats the Senior Technology Consultant description above (Noida) verbatim: identical job purpose, work experience requirements, client responsibilities, mandatory technical and functional skills, and education requirements. Only the location differs; this position is based in Mumbai, Maharashtra.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Responsibilities: We are seeking an experienced Data Scientist to lead the development of a Data Science program. You will work closely with various stakeholders to develop deep industry knowledge across the paper, water, leather, and performance chemical industries. You will help develop a data strategy for the company, including collection of the right data, creation of the data science project portfolio, partnering with external providers, and augmenting capabilities with additional internal hires. A large part of the job is communicating and developing relationships with key stakeholders and subject matter experts to tee up proof-of-concept (PoC) projects that demonstrate how data science can solve old problems in unique and novel ways. You will not have a large internal team to rely on, at least initially, so individual expertise, breadth of data science knowledge, and the ability to partner with external companies will be essential for success. In addition to the pure data science problems, you will work closely with a multi-disciplinary team of sensor scientists, software engineers, network engineers, mechanical/electrical engineers, and chemical engineers in the development and deployment of IoT solutions.

Basic Qualifications:
- Bachelor’s degree in a quantitative field such as Data Science, Statistics, Applied Mathematics, Physics, Engineering, or Computer Science
- 5+ years of relevant working experience in an analytical role involving data extraction, analysis, and visualization, with expertise in the following areas:
- One or more programming languages: R, Python, MATLAB, JMP, Minitab, Java, C++, Scala
- Key libraries such as Sklearn, XGBoost, GLMNet, Dplyr, ggplot, RShiny
- Experience and knowledge of data mining algorithms, including supervised and unsupervised machine learning techniques such as Gradient Boosting, Decision Trees, Multivariate Regression, Logistic Regression, Neural Networks, Random Forest, SVM, Naive Bayes, Time Series, and Optimization
- Microsoft IoT/data science toolkit: Azure Machine Learning, Data Lake, Data Lake Analytics, Workbench, IoT Hub, Stream Analytics, CosmosDB, Time Series Insights, Power BI
- Data querying languages: SQL, Hadoop/Hive
- A demonstrated record of success with a verifiable portfolio of problems tackled

Preferred Qualifications:
- Master’s or PhD degree in a quantitative field such as Data Science, Statistics, Applied Mathematics, Physics, Engineering, or Computer Science
- Experience in the specialty chemicals sector or a similar industry
- Background in engineering, especially Chemical Engineering
- Experience starting up a data science program
- Experience working with global stakeholders
- Experience working in a start-up environment, preferably in an IoT company
- Knowledge of quantitative modeling tools and statistical analysis

Personality Traits:
- A strong business focus, ownership, and inner self-drive to develop data science solutions for real-world customers with tangible impact
- Ability to collaborate effectively with multi-disciplinary and passionate team members
- Ability to communicate with a diverse set of stakeholders
- Strong planning and organization skills, with the ability to manage multiple complex projects
- A life-long learner who constantly updates skills
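For illustration, a minimal, self-contained sketch of the kind of supervised-learning workflow listed above (scikit-learn, gradient boosting, hold-out evaluation). The dataset here is synthetic; in the role described, the features would come from plant, sensor, or IoT data.

```python
# Illustrative sketch only: train and evaluate a gradient-boosting classifier
# on synthetic data, mirroring the techniques the posting lists.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for real process/sensor features.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
model.fit(X_train, y_train)

# Evaluate on the hold-out set.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")
```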

Posted 4 days ago

Apply

0.6 - 1.6 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose: to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.

Job Description
- Work collaboratively with Data Analysts, Data Scientists, Software Engineers, and cross-functional partners to design and deploy data pipelines that deliver analytical solutions.
- Build data pipelines, data models, data marts, and data warehouses, including OLAP cubes in a multidimensional data model, with proficiency in and conceptual understanding of PySpark and SQL scripting.
- Design, develop, test, implement, and support functional semantic data marts using various modeling techniques over underlying data stores/data warehouses, and facilitate Business Intelligence data solutions.
- Build reports, dashboards, scorecards, and visualizations using Tableau/Power BI and other data analysis techniques to collect, explore, and extract insights from structured and unstructured data.
- Build AI/ML models utilizing machine learning, statistical methods, data mining, forecasting, and predictive modeling techniques.
- Follow the DevOps model, Agile implementation, CI/CD deployment, and JIRA creation/management for projects.
- Define and build technical/data documentation, with experience in code version control systems (e.g., Git).
- Assist the owner with periodic evaluation of next-generation modernization of the platform.
- Exhibit leadership principles such as accountability and ownership of high standards, given the criticality and sensitivity of data, and customer focus: going above and beyond to find innovative solutions and products that best serve the business needs, and thereby Visa.

This is a hybrid position. Expectation of days in office will be confirmed by your hiring manager.

Qualifications (Basic):
- 0.6-1.6 years of work experience with a Bachelor’s or Master’s degree in computer/information science, with relevant work experience in the IT industry
- Enthusiastic, energetic, self-learning candidates with loads of curiosity and flexibility
- Proven hands-on capability in the development of data pipelines and data engineering
- Experience in creating data-driven business solutions and solving data problems using technologies such as Hadoop, Hive, and Spark
- Ability to program in one or more scripting languages such as Python, and one or more programming languages such as Java or Scala
- Familiarity with AI-centric libraries like TensorFlow, PyTorch, and Keras
- Familiarity with machine learning algorithms and statistical models is beneficial
- Ability to interpret complex data and provide actionable insights, encompassing statistical analysis, predictive modeling, and data visualization
- Extended experience in Agile release management practices, governance, and planning
- Strong leadership skills with demonstrated ability to lead global, cross-functional teams

Additional Information
Visa is an EEO Employer.
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
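For illustration, a minimal PySpark sketch of the OLAP-style data-mart aggregation described above, using cube() to compute subtotals across dimension combinations. The fact-table path, dimensions, and measures are hypothetical.

```python
# Illustrative sketch only: an OLAP-style rollup in PySpark for a semantic data mart.
# Source path and column names (region, product, month, amount) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("mart-sketch").getOrCreate()

fact = spark.read.parquet("s3://warehouse/fact_payments/")

# cube() produces aggregates for every combination of the listed dimensions,
# including subtotals and the grand total, much like an OLAP cube.
mart = (fact.cube("region", "product", "month")
            .agg(F.sum("amount").alias("total_amount"),
                 F.count("*").alias("txn_count")))

mart.write.mode("overwrite").parquet("s3://warehouse/mart_payments_cube/")
```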

Posted 4 days ago

Apply

8.0 - 10.0 years

0 Lacs

Delhi, India

On-site


Location: Bengaluru / Delhi. Reports To: Chief Revenue Officer.

Position Overview: We are looking for a highly motivated Pre-Sales Specialist to join our team at Neysa, a rapidly growing AI Cloud Platform company that is making waves in the industry. This is a customer-facing technical role that works closely with sales teams to understand client requirements, design tailored solutions, and drive technical engagements. You will be responsible for presenting complex technology solutions to customers, creating compelling demonstrations, and assisting in the successful conversion of sales opportunities.

Key Responsibilities:
- Solution Design & Customization: Work closely with customers to understand their business challenges and technical requirements. Design and propose customized solutions leveraging Cloud, Network, AI, and Machine Learning technologies that best fit their needs.
- Sales Support & Enablement: Collaborate with the sales team to provide technical support during the sales process, including delivering presentations, conducting technical demonstrations, and assisting in the development of proposals and RFP responses.
- Customer Engagement: Engage with prospects and customers throughout the sales cycle, providing technical expertise and acting as the technical liaison between the customer and the company. Conduct deep-dive discussions and workshops to uncover technical requirements and offer viable solutions.
- Proof of Concept (PoC): Lead the technical aspects of PoC engagements, demonstrating the capabilities and benefits of the proposed solutions. Collaborate with the customer to validate the solution, ensuring it aligns with their expectations.
- Product Demos & Presentations: Deliver compelling product demos and presentations tailored to the customer’s business and technical needs, helping organizations unlock innovation and growth through AI. Simplify complex technical concepts to ensure that both business and technical stakeholders understand the value proposition.
- Proposal Development & RFPs: Assist in crafting technical proposals, responding to RFPs (Requests for Proposals), and providing technical content that highlights the company’s offerings, differentiators, and technical value.
- Technical Workshops & Training: Facilitate customer workshops and training sessions to enable customers to understand the architecture, functionality, and capabilities of the solutions offered.
- Collaboration with Product & Engineering Teams: Provide feedback to product management and engineering teams based on customer interactions and market demands. Help shape future product offerings and improvements.
- Market & Competitive Analysis: Stay up to date on industry trends, new technologies, and competitor offerings in AI and Machine Learning, Cloud, and Networking, to provide strategic insights to sales and product teams.
- Documentation & Reporting: Create and maintain technical documentation, including solution designs, architecture diagrams, and deployment plans. Track and report on pre-sales activities, including customer interactions, pipeline status, and PoC results.

Key Skills and Qualifications:
- Experience: Minimum of 8-10 years in a pre-sales or technical sales role, with a focus on AI, Cloud, and Networking solutions.
- Technical Expertise: Solid understanding of Cloud computing, Data Center infrastructure, Networking (SDN, SD-WAN, VPNs), and emerging AI/ML technologies. Experience with architecture design and solutioning across these domains, especially in hybrid-cloud and multi-cloud environments. Familiarity with tools such as Kubernetes, Docker, TensorFlow, Apache Hadoop, and machine learning frameworks.
- Sales Collaboration: Ability to work alongside sales teams, providing the technical expertise needed to close complex deals. Experience in delivering customer-focused presentations and demos.
- Presentation & Communication Skills: Exceptional ability to articulate technical solutions to both technical and non-technical stakeholders. Strong verbal and written communication skills.
- Customer-Focused Mindset: Excellent customer service skills with a consultative approach to solving customer problems. Ability to understand business challenges and align technical solutions accordingly, with the mindset to build rapport with customers and become their trusted advisor.
- Problem-Solving & Creativity: Strong analytical and problem-solving skills, with the ability to design creative, practical solutions that align with customer needs.
- Certifications: Degree in Computer Science, Engineering, or a related field; Cloud and AI/ML certifications are highly desirable.
- Team Player: Ability to work collaboratively with cross-functional teams including product, engineering, and delivery teams.

Preferred Qualifications:
- Industry Experience: Experience delivering solutions in industries such as finance, healthcare, or telecommunications is a plus.
- Technical Expertise in AI/ML: A deeper understanding of AI/ML applications, including natural language processing (NLP), computer vision, predictive analytics, or data science use cases.
- Experience with DevOps Tools: Familiarity with CI/CD pipelines, infrastructure as code (IaC), and automation tools like Terraform, Ansible, or Jenkins.

Posted 4 days ago

Apply

3.0 - 7.0 years

5 - 8 Lacs

Mumbai

Work from Office


Position Summary: At NCR Atleos, our Internal Audit Department (IAD) exists to help enable competent and informed decisions, to add value and improve operations, and to contribute meaningfully to Board and organizational confidence. We are indispensable business partners, with a brand focused on insight, impact, and excellence. We believe that everything we do is to enhance value, provide insights, and instill confidence. To do this, we must be relevant, connected, flexible, and courageous. NCR Atleos IAD is seeking a Data Analytics Manager who will play a critical role in enhancing the Internal Audit function through data-driven insights, analytics, and process optimization. This role reports directly to the Executive Director, Internal Audit.

Key Areas of Responsibility:
- Data Analytics Strategy & Execution: Develop and implement data analytics methodologies to support the internal audit function; design and execute advanced data analysis scripts and models to identify trends, anomalies, and potential risks; partner with audit teams to integrate analytics into audit planning, execution, and reporting.
- Audit Support: Collaborate with the Director of Internal Audit to support audits in the areas of technology, information security, business processes, and financial operations; extract, analyze, and interpret data from various enterprise systems to support audit objectives; provide insights that enhance audit outcomes and help identify areas for operational improvement.
- Data Visualization & Reporting: Create clear, actionable, and visually compelling reports and dashboards to communicate audit findings to stakeholders and the Audit Committee; develop templates and standards for data analytics in audit work products to ensure consistency and clarity.
- Collaboration & Training: Work closely with IT, Finance, Operations, and other business units to gather data and validate insights; mentor and train other Internal Audit team members on leveraging data analytics tools and techniques; build partnerships across the organization to foster a culture of data-driven decision-making.
- Technology & Tools: Identify, evaluate, and implement data analytics tools and technologies to improve audit processes; stay updated on emerging technologies and trends in data analytics and audit methodologies; support automation initiatives to enhance efficiency within the Internal Audit department.
- Compliance & Risk Management: Ensure data analytics initiatives align with organizational compliance requirements and internal audit standards; monitor and evaluate data integrity, system reliability, and process controls across business units.
- Continuous Improvement: Stay abreast of emerging technologies, audit methodologies, and regulatory changes; support the Executive Director in overseeing the use of technology within the audit function, including data analytics and audit management software, to enhance audit quality and efficiency; contribute innovation and improvements to the IT audit process, controls, and the overall Internal Audit Department.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, IT, Engineering, Data Science, Econometrics, or related fields.
- Experience: Proven data analytics experience in internal audit or risk management, with strong analytical, problem-solving, and project management skills.
- Statistical Methods: Proficient in regression, time series, clustering, and decision trees.
- Programming: Skilled in JavaScript, Python, R, PHP, .NET, and SQL.
- Databases: Expertise in relational databases, data warehouses, ETL, UI tools, and query optimization.
- Visualization: Proficient in Tableau, Power BI, and advanced MS Office skills.
- Cloud Platforms: Experience with Microsoft Azure, Databricks, Hadoop, or similar platforms.
- Project Management: Experience managing analytics projects and stakeholders.
- Communication: Ability to convey complex data insights to non-technical stakeholders.
- Leadership: Demonstrated leadership and team mentoring skills.
- Cultural Sensitivity: Ability to work effectively in a global environment.
- Languages: Proficiency in multiple languages is an advantage.
- Ethics: High ethical standards and commitment to audit integrity.
- Confidentiality: Commitment to ensuring the security of sensitive data.
- Team Environment: Positive attitude within a dynamic team setting.
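For illustration, one concrete way an audit-analytics team might screen for the anomalies described above is an unsupervised outlier model. Below is a minimal sketch using scikit-learn's IsolationForest; the file, column names, and contamination rate are hypothetical, and this is only one of many techniques the role could use.

```python
# Illustrative sketch only: a simple anomaly screen over journal-entry data.
# File name, feature columns, and contamination rate are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

entries = pd.read_csv("journal_entries.csv")  # hypothetical audit extract
features = entries[["amount", "hour_posted", "days_to_period_end"]]

# IsolationForest isolates outliers; fit_predict returns -1 for anomalies.
iso = IsolationForest(contamination=0.01, random_state=0)
entries["anomaly_flag"] = iso.fit_predict(features)

flagged = entries[entries["anomaly_flag"] == -1]
print(f"{len(flagged)} entries flagged for follow-up review")
```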

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


LEAD DATA ENGINEER. Location: Hyderabad. Role: Permanent. Mode: WFO.

Job Responsibilities:
- Tracks the various machine learning projects and their data needs
- Tracks and improves the Kanban process for product maintenance
- Drives complex technical discussions both within the company and with outside data partners
- Actively contributes to the design of machine learning solutions through a deep understanding of how the data is used and how new sources of data can be introduced
- Advocates for investments in tools and technologies to streamline data workflows and reduce technical debt
- Continuously explores and adopts emerging technologies and methodologies in data engineering and machine learning
- Develops and maintains scalable data pipelines to support machine learning models and analytics
- Collaborates with data scientists to ensure efficient data processing and model deployment
- Ensures data quality, integrity, and security across all stages of the data pipeline
- Implements monitoring and alerting systems to detect anomalies in data processing and model performance
- Enhances data versioning, data lineage, and reproducibility practices to improve model transparency and auditing

Qualifications:
- 5+ years of experience in data engineering or related fields, with a strong focus on building scalable data pipelines to support machine learning workflows
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or other relevant fields
- Specific experience in Kafka is required; Snowflake and Databricks would be a huge plus
- Proven expertise in designing, implementing, and maintaining large-scale, high-performance data architectures and ETL processes handling 1 TB a day
- Strong knowledge of database management systems (SQL and NoSQL), distributed data processing (e.g., Hadoop, Spark), and cloud platforms (AWS, GCP, Azure)
- Experience working closely with data scientists and machine learning engineers to optimize data flows for model training and real-time inference with latency requirements
- Hands-on experience with data wrangling, data preprocessing, and feature engineering to ensure clean, high-quality data for machine learning models
- Solid understanding of data governance, security protocols, and compliance requirements (e.g., GDPR, HIPAA) to ensure data privacy and integrity

Preferred:
- Experience in data pipelines and analytics for video-game development
- Experience in the advertising industry
- Experience in online businesses where transactions happen without human intervention
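For illustration, a minimal Spark Structured Streaming job reading from Kafka, the combination this posting calls out for scalable ML-feeding pipelines. The topic, broker, and sink paths are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Illustrative sketch only: ingest a Kafka topic with Spark Structured Streaming
# and land it as parquet. Broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

events = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker1:9092")
               .option("subscribe", "game-telemetry")
               .load())

# Kafka delivers key/value as binary; decode before downstream processing.
decoded = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (decoded.writeStream
                .format("parquet")
                .option("path", "s3://lake/raw/telemetry/")
                .option("checkpointLocation", "s3://lake/_checkpoints/telemetry/")
                .trigger(processingTime="1 minute")
                .start())
query.awaitTermination()
```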

Posted 4 days ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequently occurring trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes and updates
- Enroll in product-specific and other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver:
1. Process: No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2. Team Management: Productivity, efficiency, absenteeism
3. Capability Development: Triages completed, Technical Test performance

Mandatory Skills: Hadoop. Experience: 5-8 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills.
We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 4 days ago

Apply

7.0 years

0 Lacs

India

Remote


About Lemongrass: Lemongrass is a software-enabled services provider, synonymous with SAP on Cloud, focused on delivering superior, highly automated Managed Services to Enterprise customers. Our customers span multiple verticals and geographies across the Americas, EMEA and APAC. We partner with AWS, SAP, Microsoft and other global technology leaders.

We are seeking an experienced Cloud Data Engineer with a strong background in AWS, Azure, and GCP. The ideal candidate will have extensive experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, and GCP Dataflow, as well as other ETL tools like Informatica and SAP Data Intelligence. You will be responsible for designing, implementing, and maintaining robust data pipelines and building scalable data lakes. Experience with data platforms such as Redshift, Snowflake, Databricks, and Synapse is essential. Familiarity with data extraction from SAP or ERP systems is a plus.

Key Responsibilities:
- Design and Development: Design, develop, and maintain scalable ETL pipelines using cloud-native tools (AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.). Architect and implement data lakes and data warehouses on cloud platforms (AWS, Azure, GCP). Develop and optimize data ingestion, transformation, and loading processes using Databricks, Snowflake, Redshift, BigQuery, and Azure Synapse. Implement ETL processes using tools like Informatica and SAP Data Intelligence. Develop and optimize data processing jobs using Spark Scala.
- Data Integration and Management: Integrate various data sources, including relational databases, APIs, unstructured data, and ERP systems, into the data lake. Ensure data quality and integrity through rigorous testing and validation. Perform data extraction from SAP or ERP systems when necessary.
- Performance Optimization: Monitor and optimize the performance of data pipelines and ETL processes. Implement best practices for data management, including data governance, security, and compliance.
- Collaboration and Communication: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Collaborate with cross-functional teams to design and implement data solutions that meet business needs.
- Documentation and Maintenance: Document technical solutions, processes, and workflows. Maintain and troubleshoot existing ETL pipelines and data integrations.

Qualifications:
- Education: Bachelor’s degree in Computer Science, Information Technology, or a related field. Advanced degrees are a plus.
- Experience: 7+ years as a Data Engineer or in a similar role. Proven experience with cloud platforms: AWS, Azure, and GCP. Hands-on experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, and GCP Dataflow. Experience with other ETL tools like Informatica and SAP Data Intelligence. Experience in building and managing data lakes and data warehouses. Proficiency with data platforms like Redshift, Snowflake, BigQuery, Databricks, and Azure Synapse. Experience with data extraction from SAP or ERP systems is a plus. Strong experience with Spark and Scala for data processing.
- Skills: Strong programming skills in Python, Java, or Scala. Proficient in SQL and query optimization techniques. Familiarity with data modeling, ETL/ELT processes, and data warehousing concepts. Knowledge of data governance, security, and compliance best practices. Excellent problem-solving and analytical skills. Strong communication and collaboration skills.

Preferred Qualifications:
- Experience with other data tools and technologies such as Apache Spark or Hadoop.
- Certifications in cloud platforms (AWS Certified Data Analytics - Specialty, Google Professional Data Engineer, Microsoft Certified: Azure Data Engineer Associate).
- Experience with CI/CD pipelines and DevOps practices for data engineering.

The selected applicant will be subject to a background investigation, which will be conducted, and the results of which will be used, in compliance with applicable law.

What we offer in return:
- Remote working: Lemongrass always has been and always will offer 100% remote work
- Flexibility: Work where and when you like, most of the time
- Training: A subscription to A Cloud Guru and a generous budget for certifications and other resources you'll find helpful
- State-of-the-art tech: An opportunity to learn and run the latest industry-standard tools
- Team: Colleagues who will challenge you, giving you the chance to learn from them and them from you

Lemongrass Consulting is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate on the basis of race, religion, color, national origin, religious creed, gender, sexual orientation, gender identity, gender expression, age, genetic information, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
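For illustration, the skeleton of an AWS Glue PySpark job of the kind this posting describes, reading from the Glue Data Catalog, deduplicating, and landing curated parquet. The catalog database, table, and sink path are hypothetical.

```python
# Illustrative sketch only: a minimal AWS Glue ETL job skeleton.
# Catalog database/table names and the S3 sink path are hypothetical.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, apply a simple transform, land curated parquet.
src = glue.create_dynamic_frame.from_catalog(database="raw", table_name="orders")
df = (src.toDF()
         .dropDuplicates(["order_id"])
         .filter("order_status IS NOT NULL"))

df.write.mode("overwrite").parquet("s3://curated-bucket/orders/")
job.commit()
```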

Posted 4 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next-generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come.

Role Overview: The Senior Tech Lead - AWS Data Engineering leads the design, development and optimization of data solutions on the AWS platform. The jobholder has a strong background in data engineering, cloud architecture, and team leadership, with a proven ability to deliver scalable and secure data systems.

Responsibilities:
- Lead the design and implementation of AWS-based data architectures and pipelines.
- Architect and optimize data solutions using AWS services such as S3, Redshift, Glue, EMR, and Lambda.
- Provide technical leadership and mentorship to a team of data engineers.
- Collaborate with stakeholders to define project requirements and ensure alignment with business goals.
- Ensure best practices in data security, governance, and compliance.
- Troubleshoot and resolve complex technical issues in AWS data environments.
- Stay updated on the latest AWS technologies and industry trends.

Key Technical Skills & Responsibilities:
- Overall 10+ years of experience in IT, with a minimum of 5-7 years in the design and development of cloud data platforms using AWS services.
- Must have experience designing and developing data lake / data warehouse / data analytics solutions using AWS services like S3, Lake Formation, Glue, Athena, EMR, Lambda, and Redshift.
- Must be aware of AWS access control and data security features such as VPC, IAM, Security Groups, and KMS.
- Must be good with Python and PySpark for data pipeline building.
- Must have data modeling experience, including S3 data organization.
- Must have an understanding of Hadoop components, NoSQL databases, graph databases and time series databases, and the AWS services available for those technologies.
- Must have experience working with structured, semi-structured and unstructured data.
- Must have experience with streaming data collection and processing; Kafka experience is preferred.
- Experience migrating data warehouse / big data applications to AWS is preferred.
- Must be able to use Gen AI services (like Amazon Q) for productivity gains.

Eligibility Criteria:
- Bachelor’s degree in Computer Science, Data Engineering, or a related field.
- Extensive experience with AWS data services and tools.
- AWS certification (e.g., AWS Certified Data Analytics - Specialty).
- Experience with machine learning and AI integration in AWS environments.
- Strong understanding of data modeling, ETL/ELT processes, and cloud integration.
- Proven leadership experience in managing technical teams.
- Excellent problem-solving and communication skills.

Our Offering:
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment
- Wellbeing programs and work-life balance: integration and passion-sharing events
- Attractive salary and company initiative benefits
- Courses and conferences
- Hybrid work culture

Let’s grow together.
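For illustration, a minimal boto3 sketch of running an Athena query over lake data, one of the AWS services this role designs around. The database, table, region, and output bucket are hypothetical, and the polling loop is simplified (production code should back off and time out).

```python
# Illustrative sketch only: run an Athena query from Python with boto3.
# Database, table, region, and output bucket are hypothetical.
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

qid = athena.start_query_execution(
    QueryString="SELECT region, count(*) AS orders FROM sales.orders GROUP BY region",
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state (simplified).
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows) - 1} result rows")  # first row is the header
```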

Posted 4 days ago

Apply

2.0 - 4.0 years

6 - 10 Lacs

Pune

Hybrid


So, what’s the role all about? We are looking for a highly driven and technically skilled Software Engineer to lead the integration of various Content Management Systems with AWS Knowledge Hub, enabling advanced Retrieval-Augmented Generation (RAG) search across heterogeneous customer data, without requiring data duplication. This role will also be responsible for expanding the scope of Knowledge Hub to support non-traditional knowledge items and enhance customer self-service capabilities. You will work at the intersection of AI, search infrastructure, and developer experience to make enterprise knowledge instantly accessible, actionable, and AI-ready.

How will you make an impact?
- Integrate CMS with AWS Knowledge Hub to allow seamless RAG-based search across diverse data types, eliminating the need to copy data into Knowledge Hub instances.
- Extend Knowledge Hub capabilities to ingest and index non-knowledge assets, including structured data, documents, tickets, logs, and other enterprise sources.
- Build secure, scalable connectors to read directly from customer-maintained indices and data repositories.
- Enable self-service capabilities for customers to manage content sources using AppFlow and Tray.ai, configure ingestion rules, and set up search parameters independently.
- Collaborate with the NLP/AI team to optimize relevance and performance for RAG search pipelines.
- Work closely with product and UX teams to design intuitive, powerful experiences around self-service data onboarding and search configuration.
- Implement data governance, access control, and observability features to ensure enterprise readiness.

Have you got what it takes?
- Proven experience with search infrastructure, RAG pipelines, and LLM-based applications.
- 2+ years’ hands-on experience with AWS Knowledge Hub, AppFlow, Tray.ai, or equivalent cloud-based indexing/search platforms.
- Strong backend development skills (Python, TypeScript/Node.js, .NET/Java) and familiarity with building and consuming REST APIs.
- Infrastructure as Code (IaC) knowledge, e.g., AWS CloudFormation and CDK.
- Deep understanding of data ingestion pipelines, index management, and search query optimization.
- Experience working with unstructured and semi-structured data in real-world enterprise settings.
- Ability to design for scale, security, and multi-tenant environments.

What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager, Engineering, CX. Role Type: Individual Contributor.
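For illustration, a minimal AWS CDK (v2, Python) stack of the Infrastructure-as-Code flavour the posting asks about: an S3 landing bucket plus a Lambda that could register a content source. The construct names, handler, and asset path are hypothetical, and the Lambda handler code itself is not shown.

```python
# Illustrative sketch only: a minimal CDK v2 stack in Python.
# Construct IDs, handler name, and the lambda/ asset directory are hypothetical.
from aws_cdk import App, Stack, aws_s3 as s3, aws_lambda as _lambda
from constructs import Construct

class IngestionStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Landing bucket for customer content sources.
        bucket = s3.Bucket(self, "ContentLanding", versioned=True)

        # Lambda that would register a new content source (handler code not shown).
        fn = _lambda.Function(
            self, "RegisterSource",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="register.handler",
            code=_lambda.Code.from_asset("lambda/"),
        )
        bucket.grant_read(fn)  # least-privilege read access for the function

app = App()
IngestionStack(app, "IngestionStack")
app.synth()
```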

Posted 4 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Are you a passionate Spark and Scala developer looking for an exciting opportunity to work on cutting-edge big data projects? Look no further! Delhivery is seeking a talented and motivated Spark & Scala expert to join our dynamic team.

Responsibilities:
- Develop and optimize Spark applications to process large-scale data efficiently
- Collaborate with cross-functional teams to design and implement data-driven solutions
- Troubleshoot and resolve performance issues in Spark jobs
- Stay up to date with the latest trends and advancements in Spark and Scala technologies

Requirements:
- Proficiency in Redshift, data pipelines, Kafka, real-time streaming, connectors, etc.
- 3+ years of professional experience with big data systems, pipelines, and data processing
- Strong experience with Apache Spark, Spark Streaming, and Spark SQL
- Solid understanding of distributed systems, databases, system design, and big data processing frameworks
- Familiarity with Hadoop ecosystem components (HDFS, Hive, HBase) is a plus

Posted 4 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Greetings from TCS!!! TCS is hiring Hadoop Admins: walk-in interview in Hyderabad.

Walk-in Interview Date: 21st June 2025 (Saturday), 9:30 AM to 12:30 PM
Role: Hadoop Admin
Desired Experience: 4-10 years
Job Location: Hyderabad
Venue: TCS Deccan Park, Plot No. 1, Hitech City Main Rd, Software Units Layout, HUDA Techno Enclave, Madhapur, Hyderabad, Telangana 500081

Job Description:
- 4+ years of working experience in Hadoop, with good exposure to Hive, Impala and Spark.
- Strong working knowledge of distributed systems, YARN, and cluster sizing (nodes, memory).
- Strong working experience in Sqoop, especially handling large volumes by splitting them into multiple chunks.
- Good work experience in file-to-Hadoop ingestion.
- Basic understanding of SerDe types and storage formats (Parquet, Avro, ORC, etc.).
- Good knowledge of SQL basics: joins, rank, scenario-based queries.
- Ability to grasp the 'big picture' of a solution by considering all potential options in the impacted area.
- Aptitude to understand and adapt to newer technologies.
- Experience in managing and leading small development teams in an agile environment.
- Ability to work with teammates in a collaborative manner to achieve a mission.
- Presentation skills to prepare and present to large and small groups on technical and functional topics.
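For illustration, the chunked-ingestion idea behind the Sqoop requirement above (splitting a large table on a key into parallel reads, as Sqoop does with --split-by and -m) can also be expressed with Spark's JDBC reader. The connection details, table, and bounds below are hypothetical.

```python
# Illustrative sketch only: Sqoop-style chunked ingestion via Spark's JDBC reader,
# which splits the read into parallel partitions on a numeric column, the same
# idea as Sqoop's --split-by / -m options. Connection details are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-chunked-ingest").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:oracle:thin:@db-host:1521/ORCL")
          .option("dbtable", "SALES.ORDERS")
          .option("user", "etl_user")
          .option("password", "***")
          .option("partitionColumn", "ORDER_ID")  # numeric split column
          .option("lowerBound", "1")
          .option("upperBound", "100000000")
          .option("numPartitions", "16")          # 16 parallel chunks
          .load())

orders.write.mode("overwrite").parquet("hdfs:///data/raw/orders/")
```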

Posted 4 days ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


#hiring Senior BigData Engineer for Pune location. Interested candidates can apply here or share your updated CV at atika.m@teksands.ai.

Position details:
Experience: 7+ years
Location: Pune
Position: BigData Engineer (full-time position)
Skills: BigData, Hadoop, Python/PySpark, ETL

Job Description / Role:
• Highly capable of learning new technologies and frameworks and implementing them as per project requirements while adhering to quality standards
• Experience in all phases of the data warehouse development lifecycle, from gathering requirements to testing, implementation, and support
• Adept at analyzing information system needs, evaluating end-user requirements, custom-designing solutions and troubleshooting information systems
• Develop and implement data pipelines that extract, transform, and load data into an information product that helps the organization reach strategic goals
• Investigate and analyze alternative solutions for data storage, processing, etc. to ensure the most streamlined approaches are implemented
• Ensure operational resiliency of existing data pipelines by monitoring and resolving any issues
• Communicate, collaborate and work effectively in a global environment
• Lead projects through design, implementation, automation, and maintenance of large-scale ETL processes supporting multiple business units
• Leverage industry best practices, including proper use of source control, participation in code reviews, data validation and testing
• Implement best practices in data governance to ensure data is available, usable and secure according to internal policies
• Mentor other data engineers on the team and ensure the efficient execution of their duties
• Assist in leading the development team and serve as a technical resource for team members
• Leverage new technologies and approaches to innovate with increasingly large data sets
• Ability to write algorithms with different rules
• Data warehousing principles and concepts, and modification of existing data warehouse structures

#applynow #sharecv

Posted 4 days ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description

Do:
- Research, design, develop, and modify computer vision and machine learning algorithms and models, leveraging experience with technologies such as Caffe, Torch, or TensorFlow.
- Shape product strategy for highly contextualized applied ML/AI solutions by engaging with customers, solution teams, discovery workshops and prototyping initiatives.
- Help build a high-impact ML/AI team by supporting recruitment, training and development of team members.
- Serve as an evangelist by engaging in the broader ML/AI community through research, speaking/teaching, formal collaborations and/or other channels.

Knowledge & Abilities:
- Designing integrations of, and tuning, machine learning and computer vision algorithms
- Researching and prototyping techniques and algorithms for object detection and recognition
- Convolutional neural networks (CNNs) for image classification and object detection
- Familiarity with embedded vision processing systems
- Open-source tools and platforms
- Statistical modeling, data extraction, and analysis
- Constructing, training, evaluating and tuning neural networks

Mandatory Skills:
- One or more of the following: Java, C++, Python
- Deep learning frameworks such as Caffe, Torch, or TensorFlow, and an image/video vision library like OpenCV, Clarifai, Google Cloud Vision, etc.
- Supervised and unsupervised learning
- Developed feature learning, text mining, and prediction models (e.g., deep learning, collaborative filtering, SVM, and random forest) on big data computation platforms (Hadoop, Spark, Hive, and Tableau)
- One or more of the following: Tableau, Hadoop, Spark, HBase, Kafka

Experience:
- 2-5 years of work or educational experience in machine learning or artificial intelligence
- Creation and application of machine learning algorithms to a variety of real-world problems with large datasets
- Building scalable machine learning systems and data-driven products, working with cross-functional teams
- Working with cloud services like AWS, Microsoft, IBM, and Google Cloud
- Working with one or more of the following: natural language processing, text understanding, classification, pattern recognition, recommendation systems, targeting systems, ranking systems or similar

Nice to Have:
- Contribution to research communities and/or efforts, including publishing papers at conferences such as NIPS, ICML, ACL, CVPR, etc.

Education: BA/BS (advanced degree preferable) in Computer Science, Engineering or a related technical field, or equivalent practical experience.

Wipro is an Equal Employment Opportunity employer and makes all employment and employment-related decisions without regard to a person's race, sex, national origin, ancestry, disability, sexual orientation, or any other status protected by applicable law.

Mandatory Skills: Google Gen AI. Experience: 3-5 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
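For illustration, a minimal Keras (TensorFlow) convolutional network of the image-classification kind this posting names. The input shape and class count are placeholders; real training data would replace the commented fit() call.

```python
# Illustrative sketch only: a small CNN for image classification in Keras.
# Input shape (64x64 RGB) and 10 output classes are placeholder values.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 classes, illustrative
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=5)  # with real data
```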

Posted 4 days ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Position: ETL Tester
Location: Bengaluru, Chennai, Mumbai
Experience: 5-8 years
Notice period: 0 to 30 days

The ETL Tester for Big Data and Hadoop will be responsible for testing ETL processes within a Big Data environment, ensuring data quality and integrity. The role involves designing and executing test cases, identifying and reporting defects, and collaborating with developers and other team members. Proficiency in Hadoop technologies, ETL tools, and testing methodologies is crucial.
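For illustration, the shape of an automated ETL reconciliation test this role would design: comparing a source extract against the loaded target on row counts, measure totals, and key uniqueness. File paths and column names are hypothetical; real suites would run such checks per table, per load.

```python
# Illustrative sketch only: a pytest-style ETL reconciliation test.
# Paths and columns (order_id, amount) are hypothetical.
import pandas as pd

def test_orders_load_reconciles():
    source = pd.read_parquet("staging/orders_source.parquet")
    target = pd.read_parquet("warehouse/orders_loaded.parquet")

    # Row counts must match after the load.
    assert len(source) == len(target), "row count mismatch"

    # Column-level totals should reconcile (simple checksum on a measure).
    assert source["amount"].sum() == target["amount"].sum(), "amount total drift"

    # No duplicate business keys in the target.
    assert not target["order_id"].duplicated().any(), "duplicate order_id in target"
```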

Posted 4 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


🧾 Job Title: Application Developer – Data Engineering 🕒 Experience: 4–6 Years 📅 Notice Period: Immediate to 20 Days 🔍 Job Summary: We are looking for a highly skilled Data Engineering Application Developer to join our dynamic team. You will be responsible for the design, development, and configuration of data-driven applications that align with key business processes. Your role will also include refining data workflows, optimizing performance, and supporting business goals through scalable and reliable data solutions. 📌 Roles & Responsibilities: Independently develop and maintain data pipelines and ETL processes. Become a Subject Matter Expert (SME) in data engineering tools and practices. Collaborate with cross-functional teams to gather requirements and provide data-driven solutions. Actively participate in team discussions and contribute to problem-solving efforts. Create and maintain comprehensive technical documentation, including application specifications and user guides. Stay updated with industry best practices and continuously improve application and data processing performance. 🛠️ Professional & Technical Skills: ✅ Must-Have Skills: Proficiency in Data Engineering, PySpark, and Python. Strong knowledge of ETL processes and data modeling. Experience working with cloud platforms like AWS or Azure. Hands-on expertise with SQL or NoSQL databases. Familiarity with other programming languages such as Java. ➕ Good-to-Have Skills: Knowledge of Big Data tools and frameworks (e.g., Hadoop, Hive, Kafka). Experience with CI/CD tools and DevOps practices. Exposure to containerization tools like Docker or Kubernetes. #DataEngineering #PySpark #PythonDeveloper #ETLDeveloper #BigDataJobs #DataEngineer #BangaloreJobs #PANIndiaJobs #AWS #Azure #SQL #NoSQL #CloudDeveloper #ImmediateJoiners #DataPipeline #Java #Kubernetes #SoftwareJobs #ITJobs #NowHiring #HiringAlert #ApplicationDeveloper #DataJobs #ITCareers #JoinOurTeam #TechJobsIndia #JobOpening #FullTimeJobs

Posted 4 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


There is an opportunity for a Hadoop Admin in Hyderabad, for which a walk-in interview will be held on 21st Jun 25 between 9:00 AM and 12:30 PM.

Venue: Deccan Park, Plot No. 1, Hitech City Main Rd, Software Units Layout, HUDA Techno Enclave, Madhapur, Hyderabad, Telangana 500081

Please share the details below to mamidi.p@tcs.com with the subject line "HADOOP ADMIN 21st Jun 25" if you are interested:
Email id:
Contact no:
Total exp:
Preferred location:
Current CTC:
Expected CTC:
Notice period:
Current organization:
Highest qualification that is full time:
Highest qualification university:
Any gap in education or employment:
If yes, how many years and reason for gap:
Are you available for interview on 21st Jun 25 (yes/no):

We will send you a mail by tomorrow night if you are shortlisted.

Requirements:
· 7+ years of working experience in Hadoop, with good exposure to Hive, Impala and Spark.
· Strong working knowledge of distributed systems, YARN, and cluster sizing (nodes, memory).
· Strong working experience in Sqoop, especially handling large volumes by splitting them into multiple chunks.
· Good work experience in file-to-Hadoop ingestion.
· Basic understanding of SerDe types and storage formats (Parquet, Avro, ORC, etc.).
· Good knowledge of SQL basics: joins, rank, scenario-based queries.
· Ability to grasp the 'big picture' of a solution by considering all potential options in the impacted area.
· Aptitude to understand and adapt to newer technologies.
· Experience in managing and leading small development teams in an agile environment.
· Ability to work with teammates in a collaborative manner to achieve a mission.
· Presentation skills to prepare and present to large and small groups on technical and functional topics.

Posted 4 days ago

Apply

5.0 years

0 Lacs

India

On-site

Linkedin logo

Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.

Responsibilities

Build and maintain the infrastructure for data generation, collection, storage, and processing.
Design, build, and maintain scalable data pipelines to support data flows from various sources to data warehouses and analytics platforms.
Develop and manage ETL (Extract, Transform, Load) processes to ensure data is accurately transformed and loaded into the target systems.
Design and optimize databases, ensuring performance, security, and scalability of data storage solutions.
Integrate data from various internal and external sources into unified data systems for analysis.
Work with big data technologies (e.g., Hadoop, Spark) to process and manage large volumes of structured and unstructured data.
Implement and manage cloud-based data solutions using Azure and Fabric platforms.
Ensure data quality by developing validation processes and monitoring for anomalies and inconsistencies (a validation sketch follows this listing).
Work closely with data scientists, analysts, and other stakeholders to meet their data needs and ensure smooth data operations.
Automate repetitive data processes and workflows to improve efficiency and reduce manual effort.
Implement and enforce data security protocols, ensuring compliance with industry standards and regulations.
Optimize data queries and system performance to handle large data sets efficiently.
Create and maintain clear documentation of data pipelines, infrastructure, and processes for transparency and training.
Set up monitoring tools to ensure data systems are functioning smoothly and troubleshoot any issues that arise.
Stay updated with emerging trends and tools in data engineering and continuously improve data infrastructure.

Qualifications

Azure Solution Architect certification preferred
Microsoft Fabric Analytics Engineer Associate certification preferred
5+ years of architecture experience in technology operations/development using Azure technologies
Strong experience in Python and PySpark required
Strong understanding of and experience in building lakehouses, data lakes, and data warehouses
Strong experience with Microsoft Fabric technologies
Good understanding of the Scrum Agile methodology
Strong experience with Azure Cloud technologies
Solid knowledge of SQL and non-relational (NoSQL) databases
Solid knowledge of networking, firewalls, load balancers, etc.
Exceptional communication skills and the ability to communicate appropriately with technical teams
Familiarity with at least one of the following build/deploy tools: Azure DevOps or GitHub Actions

Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Candidate Privacy Policy

Orion Systems Integrators, LLC and its subsidiaries and affiliates (collectively, "Orion," "we," or "us") are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) ("Notice") explains what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
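
The responsibilities above call out data-quality validation and anomaly monitoring. As a minimal sketch of what such a check might look like in PySpark (table names and the tolerance threshold are hypothetical, not Orion's actual rules):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.table("lakehouse.customers")  # hypothetical lakehouse table
total = df.count()

# Rule 1: key columns must not be null.
null_ids = df.filter(F.col("customer_id").isNull()).count()

# Rule 2: row count must not drop sharply vs. the previous snapshot
# (a crude anomaly check on load volume).
previous = spark.read.table("lakehouse.customers_snapshot").count()
drop_pct = (previous - total) / previous * 100 if previous else 0.0

failures = []
if null_ids > 0:
    failures.append(f"{null_ids} rows with null customer_id")
if drop_pct > 10:  # hypothetical 10% tolerance
    failures.append(f"row count dropped {drop_pct:.1f}% vs. previous load")

# Fail the pipeline loudly so orchestration can alert and retry.
if failures:
    raise ValueError("Data quality check failed: " + "; ".join(failures))
```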

Posted 4 days ago

Apply

0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Linkedin logo

Company Description

Logikview Technologies Pvt. Ltd. is a forward-thinking data analytics services firm. As a strategic partner, we provide a comprehensive range of analytics services to our clients' business units or analytics teams. From setting up big data and analytics infrastructure to performing data transformations and building advanced predictive analytical engines, Logikview supports clients throughout their analytics journey. We offer ready-to-deploy productized analytics solutions in domains such as retail, telecom, education, and healthcare.

Role Description

We are seeking a full-time Technical Lead for an on-site role in Indore. As a Technical Lead, you will oversee a team of engineers, manage project timelines, and ensure the successful delivery of analytics solutions. Day-to-day tasks include designing and implementing data models, developing and optimizing data pipelines, and collaborating with cross-functional teams to address technical challenges. You will also be responsible for code reviews, mentoring team members, and staying updated with the latest technological advancements.

Qualifications

Proficiency in data modeling, data warehousing, and ETL processes
Experience with big data technologies such as Hadoop, Spark, and Kafka
Knowledge of programming languages like Python, Java, and SQL
Strong understanding of machine learning algorithms and predictive analytics
Excellent problem-solving skills and the ability to troubleshoot technical issues
Proven experience in team leadership and project management
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
Relevant certifications in data analytics or big data technologies are a plus

Posted 4 days ago

Apply

175.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you’ll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

Smart Monitoring is an industry-leading, award-winning risk monitoring/control testing platform owned and managed by Global Risk and Compliance. It leverages advanced technology, automation, and data science to detect, predict, and prevent risks. Its patent-pending approach uniquely combines advances in data science and technology (AI, machine learning, cloud computing) to transform risk management. The Smart Monitoring Center of Excellence is a group of experts who leverage the Smart Monitoring platform to build and manage Key Risk Indicators (KRIs) and Automated Control Tests (ACTs) that monitor risks and detect control failures across AXP, supporting business units and staff groups, product lines, and processes. The Smart Monitoring Center of Excellence team supports the businesses with a mission to enable business growth and objectives while maintaining a strong control environment.

We are seeking a Data Scientist to join this exciting opportunity to grow the Smart Monitoring COE multi-fold. As a member of the SM COE, the incumbent will be responsible for identifying opportunities to apply new and innovative ways to monitor risks through KRIs/ACTs and execute appropriate strategies in partnership with Business, OE, Compliance, and other stakeholder teams.

Key activities for the role will include:

Lead the design and implementation of NLP- and GenAI-based solutions for real-time identification of Key Risk Indicators
Own the architecture and roadmap of the models and tools from ideation to productionization
Lead a team of data scientists, providing mentorship, performance coaching, and technical guidance to build domain depth and deliver excellence
Champion governance and interpretability of models from a validation point of view
Lead R&D efforts to leverage external data (social forums, etc.) to generate insights on operational/compliance risks
Provide rigorous analytics solutions to support critical business functions and support machine learning solution prototyping
Collaborate with model consumers, data engineers, and all related stakeholders to ensure precise implementation of solutions

Qualifications:

· Masters/PhD in a quantitative field (Computer Science, Statistics, Mathematics, Operations Research, etc.) with hands-on experience leveraging sophisticated analytical and machine learning techniques; strong preference for candidates with 5-6+ years of working experience driving business results
· Demonstrated ability to frame business problems as machine learning problems and to leverage external thinking and tools (from academia and/or other industries) to engineer a solvable approach that delivers business insights and an optimal control policy
· Creativity to go beyond the status quo to construct and deliver the best solution to the problem; ability and comfort working independently and making key decisions on projects
· Deep understanding of machine learning/statistical algorithms such as time series analysis and outlier detection (see the illustrative sketch after this listing), neural networks/deep learning, boosting, and reinforcement learning; experience with data visualization a plus
· Expertise in an analytical language (Python, R, or the equivalent) and experience with databases (GCP, SQL, or the equivalent)
· Prior experience working with Big Data tools and platforms (Hadoop, Spark, or the equivalent)
· Experience in building NLP and/or GenAI solutions is strongly preferred
· Self-motivated, with the ability to operate independently and handle multiple workstreams and ad-hoc tasks simultaneously
· Team player with strong relationship-building, management, and influencing skills
· Strong verbal and written communication skills

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
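
As a flavor of the outlier-detection work behind KRIs mentioned above, here is a minimal, purely illustrative sketch of a rolling z-score check on a daily risk metric. All names, values, and thresholds are hypothetical; this is not the team's actual method:

```python
import pandas as pd

# Hypothetical daily KRI series, e.g., count of failed control checks per day.
kri = pd.Series(
    [12, 14, 11, 13, 12, 15, 13, 12, 14, 41],  # last value is an injected spike
    index=pd.date_range("2025-06-01", periods=10, freq="D"),
)

# Rolling z-score against the trailing 7-day window (excluding the current day,
# so the spike cannot inflate its own baseline).
mean = kri.shift(1).rolling(7).mean()
std = kri.shift(1).rolling(7).std()
zscore = (kri - mean) / std

# Flag days whose metric deviates more than 3 standard deviations.
alerts = kri[zscore.abs() > 3]
print(alerts)  # the spike on the final day is flagged
```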

Posted 4 days ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Linkedin logo

Exp: 5+ years
NP: Immediate to 15 days
Rounds: 3 (virtual)
Mandatory skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description

The Role
Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
Constructing infrastructure for efficient ETL processes from various sources and storage systems.
Leading the implementation of algorithms and prototypes to transform raw data into useful information.
Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
Creating innovative data validation methods and data analysis tools.
Ensuring compliance with data governance and security policies.
Interpreting data trends and patterns to establish operational alerts.
Developing analytical tools, programs, and reporting mechanisms.
Conducting complex data analysis and presenting results effectively.
Preparing data for prescriptive and predictive modeling.
Continuously exploring opportunities to enhance data quality and reliability.
Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements
Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, distributed data pipelines.
High proficiency in Scala/Java and Spark for applied large-scale data processing.
Expertise with big data technologies, including Spark, Data Lake, and Hive (a short Spark-to-Hive sketch follows this listing).
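
Since the role centers on Spark pipelines that land in Hive, here is a minimal PySpark sketch of the write side: transforming raw events and loading them into a date-partitioned Hive table. Paths, database, table, and column names are all hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("etl-to-hive")
    .enableHiveSupport()  # needed so Spark can see the Hive metastore
    .getOrCreate()
)

# Extract and transform: parse raw events and derive a partition column.
events = spark.read.json("hdfs:///landing/events/")  # hypothetical landing zone
cleaned = (
    events.filter(F.col("event_id").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write as a Hive-visible Parquet table partitioned by date, so
# downstream Hive/Impala queries can prune partitions.
(
    cleaned.write.mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.events_cleaned")  # hypothetical db.table
)
```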

Posted 4 days ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Remote

Naukri logo

Hiring for a US-based multinational company (MNC).

We are seeking a skilled and detail-oriented Data Engineer to join our team. In this role, you will design, build, and maintain scalable data pipelines and infrastructure to support business intelligence, analytics, and machine learning initiatives. You will work closely with data scientists, analysts, and software engineers to ensure that high-quality data is readily available and usable.

Responsibilities:
Design and implement scalable, reliable, and efficient data pipelines for processing and transforming large volumes of structured and unstructured data.
Build and maintain data architectures including databases, data warehouses, and data lakes.
Collaborate with data analysts and scientists to support their data needs and ensure data integrity and consistency.
Optimize data systems for performance, cost, and scalability.
Implement data quality checks, validation, and monitoring processes.
Develop ETL/ELT workflows using modern tools and platforms (see the Airflow sketch below).
Ensure data security and compliance with relevant data protection regulations.
Monitor and troubleshoot production data systems and pipelines.

Requirements:
Proven experience as a Data Engineer or in a similar role.
Strong proficiency in SQL and at least one programming language such as Python, Scala, or Java.
Experience with data pipeline tools such as Apache Airflow, Luigi, or similar.
Familiarity with modern data platforms and tools:
Big Data: Hadoop, Spark
Data Warehousing: Snowflake, Redshift, BigQuery, Azure Synapse
Databases: PostgreSQL, MySQL, MongoDB
Experience with cloud platforms (AWS, Azure, or GCP).
Knowledge of data modeling, schema design, and ETL best practices.
Strong analytical and problem-solving skills.
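
For the Airflow-style orchestration named above, a minimal sketch of a daily ELT DAG. The DAG id, task names, and extract/load callables are hypothetical placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull data from a source system into a staging area.
    print("extracting...")


def load():
    # Placeholder: load staged data into the warehouse, then transform in SQL.
    print("loading...")


with DAG(
    dag_id="daily_elt",              # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # run extract first, then load
```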

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Location: Hyderabad
Budget: 3.5x
Notice: Immediate joiners

Requirements:
• BS degree in computer science, computer engineering, or equivalent
• 5-9 years of experience delivering enterprise software solutions
• Familiarity with Spark, Scala, Python, and AWS Cloud technologies
• 2+ years of experience across multiple Hadoop/Spark technologies such as Hadoop, MapReduce, HDFS, HBase, Hive, Flume, Sqoop, Kafka, and Scala
• A flair for data, schemas, and data models, and for bringing efficiency to the big data life cycle
• Experience with Agile development methodologies
• Experience with data ingestion and transformation
• Understanding of secure application development methodologies
• Experience with Airflow and Python preferred
• Understanding of automated QA needs related to Big Data technology
• Strong object-oriented design and analysis skills
• Excellent written and verbal communication skills

Responsibilities:
• Utilize your software engineering skills, including Spark, Python, and Scala, to analyze disparate, complex systems and collaboratively design new products and services
• Integrate new data sources and tools
• Implement scalable and reliable distributed data replication strategies
• Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases
• Perform analysis of large data sets using components from the Hadoop ecosystem
• Own product features from development and testing through to production deployment
• Evaluate big data technologies and prototype solutions to improve our data processing architecture
• Automate different pipelines

Posted 4 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Global Technology Solutions (GTS) at ResMed is a division dedicated to creating innovative, scalable, and secure platforms and services for patients, providers, and people across ResMed. The primary goal of GTS is to accelerate well-being and growth by transforming the core, enabling patient, people, and partner outcomes, and building future-ready operations. The strategy of GTS focuses on aligning goals and promoting collaboration across all organizational areas. This includes fostering shared ownership, developing flexible platforms that can easily scale to meet global demands, and implementing global standards for key processes to ensure efficiency and consistency.

Role Overview

As a Data Engineering Lead, you will be responsible for overseeing and guiding the data engineering team in developing, optimizing, and maintaining our data infrastructure. You will play a critical role in ensuring the seamless integration and flow of data across the organization, enabling data-driven decision-making and analytics.

Key Responsibilities

Data Integration: Coordinate with various teams to ensure seamless data integration across the organization's systems.
ETL Processes: Develop and implement efficient data transformation and ETL (Extract, Transform, Load) processes.
Performance Optimization: Optimize data flow and system performance for enhanced functionality and efficiency.
Data Security: Ensure adherence to data security protocols and compliance standards to protect sensitive information.
Infrastructure Management: Oversee the development and maintenance of the data infrastructure, ensuring scalability and reliability.
Collaboration: Work closely with data scientists, analysts, and other stakeholders to support data-driven initiatives.
Innovation: Stay updated with the latest trends and technologies in data engineering and implement best practices.

Qualifications

Experience: Proven experience in data engineering, with a strong background in leading and managing teams.
Technical Skills: Proficiency in programming languages such as Python, Java, and SQL, along with experience in big data technologies like Hadoop, Spark, and Kafka.
Data Management: In-depth understanding of data warehousing, data modeling, and database management systems.
Analytical Skills: Strong analytical and problem-solving skills with the ability to handle complex data challenges.
Communication: Excellent communication and interpersonal skills, capable of working effectively with cross-functional teams.
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Why Join Us?

Work on cutting-edge data projects and contribute to the organization's data strategy.
Collaborative and innovative work environment that values creativity and continuous learning.

If you are a strategic thinker with a passion for data engineering and leadership, we would love to hear from you. Apply now to join our team and make a significant impact on our data-driven journey.

Joining us is more than saying “yes” to making the world a healthier place. It’s discovering a career that’s challenging, supportive and inspiring. Where a culture driven by excellence helps you not only meet your goals, but also create new ones. We focus on creating a diverse and inclusive culture, encouraging individual expression in the workplace and thrive on the innovative ideas this generates. If this sounds like the workplace for you, apply now! We commit to respond to every applicant.

Posted 4 days ago

Apply

Exploring Hadoop Jobs in India

The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving IT industry and have a high demand for Hadoop professionals.

Average Salary Range

The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.

Career Path

In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.

Related Skills

In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.

Interview Questions

  • What is Hadoop and how does it work? (basic)
  • Explain the difference between HDFS and MapReduce. (medium)
  • How do you handle data skew in Hadoop? (medium; a key-salting sketch follows this list)
  • What is YARN in Hadoop? (basic)
  • Describe the concept of NameNode and DataNode in HDFS. (medium)
  • What are the different types of join operations in Hive? (medium)
  • Explain the role of the ResourceManager in YARN. (medium)
  • What is the significance of the shuffle phase in MapReduce? (medium)
  • How does speculative execution work in Hadoop? (advanced)
  • What is the purpose of the Secondary NameNode in HDFS? (medium)
  • How do you optimize a MapReduce job in Hadoop? (medium)
  • Explain the concept of data locality in Hadoop. (basic)
  • What are the differences between Hadoop 1 and Hadoop 2? (medium)
  • How do you troubleshoot performance issues in a Hadoop cluster? (advanced)
  • Describe the advantages of using HBase over traditional RDBMS. (medium)
  • What is the role of the JobTracker in Hadoop? (medium)
  • How do you handle unstructured data in Hadoop? (medium)
  • Explain the concept of partitioning in Hive. (medium)
  • What is Apache ZooKeeper and how is it used in Hadoop? (advanced)
  • Describe the process of data serialization and deserialization in Hadoop. (medium)
  • How do you secure a Hadoop cluster? (advanced)
  • What is the CAP theorem and how does it relate to distributed systems like Hadoop? (advanced)
  • How do you monitor the health of a Hadoop cluster? (medium)
  • Explain the differences between Hadoop and traditional relational databases. (medium)
  • How do you handle data ingestion in Hadoop? (medium)
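
One of the most common of these questions, handling data skew, is worth illustrating. Below is a minimal PySpark sketch of key salting, which spreads a hot join key across several artificial sub-keys so no single partition does all the work. Paths, column names, and the salt factor are illustrative:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("skew-salting").getOrCreate()

SALT_BUCKETS = 8  # illustrative: more buckets = finer spread of the hot key

# Large fact table with a skewed join key (e.g., one customer_id dominates).
facts = spark.read.parquet("hdfs:///data/facts")  # hypothetical paths
dims = spark.read.parquet("hdfs:///data/dims")

# Add a random salt to the fact side so the hot key splits into 8 sub-keys.
salted_facts = facts.withColumn("salt", (F.rand() * SALT_BUCKETS).cast("int"))

# Explode the dimension side so every salt value has a matching row.
salted_dims = dims.withColumn(
    "salt", F.explode(F.array([F.lit(i) for i in range(SALT_BUCKETS)]))
)

# Join on (key, salt): each sub-key now lands on a different partition,
# so the hot key's rows are processed in parallel instead of on one executor.
joined = salted_facts.join(salted_dims, on=["customer_id", "salt"]).drop("salt")
```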

Closing Remark

As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies