368 Sqoop Jobs - Page 12

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Source: LinkedIn

Greetings from TCS! TCS is hiring for Data Analyst.

Interview mode: Virtual
Required experience: 7-18 years
Work location: PAN India

Strong knowledge of:
- Data processing software and strategies; Big Data, information retrieval, data mining; SQL
- 4+ years of experience with cloud platforms and customer-facing projects
- Strong ability to interface successfully (verbal and written) with clients in a concise manner while managing expectations at both executive and technical levels
- General data platforms and data lakes; relational and non-relational databases; streaming and batch pipelines
- SQL engines (possible options: MySQL, SQL Server, PostgreSQL)
- NoSQL engines (possible options: MongoDB, Cassandra, HBase, Dynamo, Redis)
- Google Cloud data services: Cloud SQL, BigQuery, Dataflow, Dataproc, Bigtable, Composer, Cloud Functions
- Python; Hadoop ecosystem / Apache software: Spark, Beam, Hive, Airflow, Sqoop, Oozie
- Code repositories / CI/CD tools (possible options: GitHub, Cloud Source Repositories, GitLab, Azure DevOps)

If interested, kindly send your updated CV and the details below via DM or e-mail to srishti.g2@tcs.com: Name; E-mail ID; Contact number; Highest qualification; Preferred location; Highest qualification university; Current organization; Total years of experience; Relevant years of experience; Any gap (mention number of months/years, career/education); If any, reason for gap; Is it a career restart; Previous organization name; Current CTC; Expected CTC; Notice period; Have you worked with TCS before (permanent/contract).

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Position: DAML Head - Solution Architect / Technical Delivery
Experience: 15+ years
Location: Noida

Job Summary: The DAML Head - Solution Architect / Technical Delivery will lead the design and delivery of advanced data management and analytics solutions. This role involves overseeing the creation of modern data warehouses, business intelligence systems, and cutting-edge analytics platforms, with a strong emphasis on AI/ML and Generative AI technologies. The ideal candidate will have significant experience in Big Data, program management, and senior-level stakeholder engagement.

Key Responsibilities:
- Architectural leadership: Lead the architectural design and development of multi-tenant modern data warehouses, business intelligence systems, and analytics platforms, including AI/ML and Generative AI components. Ensure the platform's security, data isolation, quality, integrity, extensibility, adaptability, scalability, availability, and understandability.
- Big Data and AI/ML delivery: Oversee the delivery of Big Data and AI/ML projects, ensuring alignment with architectural standards and business objectives.
- Solution development: Architect scalable, performance-oriented solutions using Big Data technologies and traditional ETL tools, incorporating AI/ML and Generative AI technologies where applicable. Manage logical and physical data models for data warehousing (DW) and OLAP systems.
- Technology evaluation: Lead the evaluation and selection of technology products to achieve strategic business intelligence and data management goals, including AI/ML and Generative AI technologies.
- Stakeholder engagement: Facilitate high-level discussions with stakeholders, including CXOs and tech leaders within customer and AWS ecosystems, to refine software requirements and provide guidance on technical components, frameworks, and interfaces.
- Program and people management: Demonstrate strong program management skills, overseeing multiple projects and ensuring timely delivery. Manage and mentor team members, including junior developers and team leads, ensuring adherence to best practices and fostering professional growth.
- Documentation and communication: Develop and maintain comprehensive technical design documentation for data warehouse architecture and systems. Communicate effectively with cross-functional teams to resolve issues, manage changes in scope, and ensure successful project execution.
- Infrastructure planning: Assess data volumes and customer reporting SLAs, and recommend infrastructure sizing and orchestration solutions.

Skill Requirements:
- Experience: Minimum of 15 years in data management and analytics roles, with substantial experience in Big Data solutions architecture; at least 10 years in Big Data delivery roles, including hands-on experience with AI/ML and Generative AI technologies.
- Technical expertise: Proficiency with Hadoop distributions (e.g., Hortonworks, Cloudera) and related technologies (e.g., Kafka, Spark, Cloud Data Flow, Pig, Hive, Sqoop, Oozie). Experience with RDBMS (e.g., MySQL, Oracle), NoSQL databases, and ETL/ELT tools (e.g., Informatica PowerCenter, Sqoop). Experience with large-scale cluster installation and deployment.
- Analytical and management skills: Strong analytical and problem-solving skills, with the ability to develop multiple solution options. Proven program management capabilities, with a track record of managing complex projects and leading cross-functional teams.
- Knowledge: Deep understanding of Data Engineering, Data Management, Data Science, and AI/ML principles, including Generative AI. Familiarity with design and architectural patterns, as well as cloud-based deployment models. Knowledge of Big Data security concepts and tools, including Kerberos, Ranger, and Knox.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

hackajob is collaborating with Wipro to connect them with exceptional tech professionals for this role.

Title: Data Engineer
Requisition ID: 64694
City: Pune
Country/Region: IN

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Experience: 5 to 10 years

Key skills: Azure Data Factory (primary), Azure Databricks Spark (PySpark, SQL)

Must-have skills:
- Cloud certified in one of these categories: Azure Data Engineer
- Azure Data Factory, Azure Databricks Spark (PySpark or Scala), SQL, data ingestion, curation
- Semantic modelling / optimization of the data model to work within Rahona
- Experience in Azure ingestion from on-prem sources, e.g., mainframe, SQL Server, Oracle
- Experience in Sqoop / Hadoop
- Microsoft Excel (for metadata files with ingestion requirements)
- Any other Azure/AWS/GCP certificate and hands-on cloud data engineering experience
- Strong programming skills in at least one of Python, Scala, or Java
- Strong SQL skills (T-SQL or PL/SQL)
- Data file movement via mailbox
- Source-code versioning/promotion tools, e.g., Git/Jenkins
- Orchestration tools, e.g., Autosys, Oozie

Nice-to-have skills:
- Experience working with mainframe files
- Experience in an Agile environment with JIRA/Confluence tools

Responsibilities:
- Handle technical escalations through effective diagnosis and troubleshooting of client queries; manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve, escalate in a timely manner to TA & SES
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions; troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business; organize ideas and communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client; mentor and guide Production Specialists on improving technical knowledge
- Collate and conduct trainings (triages) to bridge the skill gaps identified through interviews with the Production Specialists, and inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates; enroll in product-specific and other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team; update job knowledge by participating in self-learning opportunities and maintaining personal networks

Performance parameters:
1. Process - number of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2. Team management - productivity, efficiency, absenteeism
3. Capability development - triages completed, technical test performance
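For illustration, a minimal PySpark sketch of the ingestion-and-curation work this role describes: pulling an on-prem SQL Server table into a Databricks Delta table over JDBC. All hostnames, credentials, and table names are hypothetical.

```python
# Hypothetical names throughout: ingest an on-prem SQL Server table
# into a curated Delta table, the kind of ADF/Databricks ingestion
# and curation work this listing describes.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("onprem-sqlserver-ingest").getOrCreate()

raw = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")  # assumed host/db
    .option("dbtable", "dbo.orders")   # assumed source table
    .option("user", "svc_ingest")      # assumed service account
    .option("password", "<secret>")    # fetch from a secret scope in practice
    .load()
)

# Light curation before landing in the lake: drop exact duplicates and
# standardize column names for downstream semantic modelling.
curated = raw.dropDuplicates()
curated = curated.toDF(*[c.lower() for c in curated.columns])

curated.write.format("delta").mode("overwrite").saveAsTable("curated.orders")
```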

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Manager, Business Analyst - C1
Employment Type: Permanent
Location: Chennai

Responsible Functions:
- Gen AI: Expertise in leveraging advanced AI technologies to analyze business processes, identify automation and optimization opportunities, and drive data-driven decision-making. Ability to collaborate with stakeholders to translate business needs into AI solutions, ensuring seamless integration and maximizing operational efficiency, productivity, and innovation.
- Product Vision & Strategy: Perform market analysis to understand the market landscape, including competitor analysis, trends, and customer needs, to help define and communicate a product vision and strategy aligned with company objectives.
- Stakeholder Engagement: Interact with diverse stakeholders to conduct JAD sessions and use a variety of techniques to elicit, elaborate, analyze, and validate client requirements. Interact with the business team to conduct product demonstrations and to evaluate, prioritize, and build new features and functions.
- Requirements Management: Analyze and develop the Business Requirements Document (BRD) and Functional Specification Document (FSD) for client/business reference. Translate business requirements into user stories, prioritize the backlog, and conduct Scrum ceremonies for development consumption.
- Functional Solution Development: Responsible for end-to-end functional solutioning. Analyze the business problem and validate the key business requirements to create a complete picture of workflows and technical requirements fulfilled by existing and proposed software. Identify, define, and evaluate potential product solutions, including off-the-shelf and open-source components and system architecture, to ensure they meet business requirements.
- Communication & Collaboration: Be a strong interface between business and internal stakeholders. Collaborate with the development team (including architecture, coding, and testing teams) to produce and maintain additional product and project deliverables such as technical design, testing and program specifications, additional test scenarios, and the project plan. Proactively manage expectations regarding roadblocks in the critical path to help ensure successful delivery of the solution.
- Business Value: Comprehend the business value of the solution being developed, and assess its fit within the overall architecture, its risks, and its technical feasibility. Drive business metrics that help optimize the business, and deep-dive into data for insights as required.
- Team Management: Manage a small team of Business Analysts, define clear goals, and be accountable for the functional solution delivered by the team. Participate in recruitment and in building a strong BA team.
- RFP Support: Participate in Request for Information/Proposal handling and support with responses and solutions to questions or information requested.
- Client/Business Training: Work with technical writers to create training material and conduct product/platform training sessions with diverse stakeholders.

Essential Functions:
- Multi-disciplinary technologist who enjoys designing, executing, and selling healthcare solutions, and being on the front line of client communications and selling strategies
- Deep understanding of the US healthcare value chain and key impact drivers (Payer and/or Provider)
- Knowledgeable about how data management and data science are used to solve organizational problems in the healthcare context
- Hands-on experience in two or more of the data and analytics technical domains: enterprise cloud data warehousing, integration, preparation, and visualization, along with artificial intelligence, machine learning, data science, data modeling, data management, and data governance
- Strong problem-solving and analytical skills: the ability to break down a vague business problem into structured data analysis approaches, work with incomplete information, and make judgment-driven decisions based on experience
- Experience ramping up analytics programs with new clients, including integrating with the work of other teams to ensure the analytics approach is aligned with operations, as well as engaging in consultative selling

Primary Internal Interactions:
- Review with the Product Manager and AVP for improvements in the product development lifecycle
- Assessment meetings with VP and above for additional product development features
- Manage a small team of Business Analysts to lead the requirements effort for product development

Primary External Interactions:
- Communicate with onshore stakeholders and executive team members
- Help the Product Management Group set the product roadmap and identify future sellable product features
- Client interactions to better understand expectations and streamline solutions; if required, act as a bridge between the client and the technology teams

Skills:
- Required: SME in US healthcare with deep knowledge of the claims and payments lifecycle, and at least 8 years of experience working with various US healthcare payer clients
- Must have: Excellent understanding of the software development lifecycle and methodologies such as Agile Scrum and Waterfall. Strong experience in requirements elicitation techniques, functional documentation, stakeholder management, business solution validation, and user walkthroughs. Strong documentation skills to create BRDs, FSDs, process flows, and user stories. Strong presentation skills. Good knowledge of SQL. Knowledge of tools such as Azure DevOps, Jira, Visio, and Draw.io. Experience in AI or Gen AI projects.
- Nice to have: Development experience of 2 or more years. Experience with Big Data tools, including but not limited to Python, Spark + Python, Hive, HBase, Sqoop, CouchDB, MongoDB, MS SQL, Cassandra, Kafka. Knowledge of data analysis tools (online analytical processing (OLAP), ETL frameworks). Knowledge of enterprise modeling tools and data integration platforms (Erwin, Embarcadero, Informatica, Talend, SSIS, DataStage, Pentaho). Knowledge of enterprise business intelligence platforms (Tableau, Power BI, Business Objects, MicroStrategy, Cognos). Knowledge of enterprise data warehousing platforms (Oracle, Microsoft, DB2, Snowflake, AWS, Azure, Google Cloud Platform).

Process-Specific Skills:
- Delivery domain: software development - SDLC and Agile certifications
- Business domain: US healthcare and payer analytics, payment integrity, fraud, waste & abuse, claims management

Soft Skills:
- Understanding of the healthcare business vertical and its business terms
- Good analytical skills; strong communication skills, oral and written
- Ability to work with various stakeholders across different geographical locations
- Able to function as an individual contributor if required
- Strong aptitude to learn and implement healthcare solutions; good leadership skills

Working Hours: General shift, 12 PM to 9 PM; may be required to extend as per project release needs.

Education Requirements: Master's or bachelor's degree from top-tier colleges with good grades, preferably in a relevant field such as Mathematics, Statistics, or Computer Science, or equivalent experience.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Skills: Big Data, Hadoop frameworks, Hive, Jira, DevOps tools, Git

- Excellent knowledge of Big Data and Hadoop frameworks (Cloudera preferred)
- Hands-on experience implementing and automating data ingestion solutions using Hadoop, Sqoop, Hive, Impala, and Spark
- Hands-on experience with Linux/Unix shell scripting (mandatory), SQL, and ETL
- Experience debugging, performance-tuning, and troubleshooting big data pipelines
- Good to have: knowledge of ServiceNow, Jenkins, Git, Bitbucket, JIRA, and DevOps tools
- Experience working in Agile Scrum methodology
- Knowledge of Python and R is a plus

Relevant experience: 5+ years
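By way of illustration, a hedged sketch of automating a Sqoop-to-Hive ingestion from Python; the connection string, table, and paths are hypothetical, and pipelines like the ones this role describes are often driven from shell scripts and a scheduler instead.

```python
# Minimal sketch (hypothetical names): wrap a `sqoop import` invocation
# so an ingestion can be automated and retried from Python.
import subprocess

def sqoop_import(table: str, target_dir: str) -> None:
    cmd = [
        "sqoop", "import",
        "--connect", "jdbc:oracle:thin:@db-host:1521/ORCL",  # assumed source DB
        "--username", "etl_user",
        "--password-file", "/user/etl/.sqoop.pwd",  # keeps the secret out of argv
        "--table", table,
        "--target-dir", target_dir,
        "--num-mappers", "4",
        "--hive-import",                # load straight into a Hive table
        "--hive-table", f"staging.{table.lower()}",
    ]
    subprocess.run(cmd, check=True)  # raises if Sqoop exits non-zero

sqoop_import("ORDERS", "/data/staging/orders")
```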

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Senior Business Analyst / Lead Business Analyst - B1/B2
Employment Type: Permanent
Location: Chennai

Responsible Functions:
- Product Vision & Strategy: Provide inputs on product features through market analysis to understand the market landscape, including competitor solutions, trends, and customer needs.
- Stakeholder Engagement: Interact with diverse stakeholders to conduct JAD sessions and use a variety of techniques to elicit, document, analyze, and validate client requirements. Interface with the business team to conduct product demonstrations and to evaluate, prioritize, and build new features and functions.
- Requirements Management: Analyze and develop the business requirements document (BRD) for client/business reference. Translate business requirements into user stories; create and prioritize them in the backlog, sprints, DoD, and releases using Jira for development consumption. Perform requirements reviews with external and internal stakeholders, resolving issues and suggesting corrective actions.
- Functional Solution Development: Responsible for the end-to-end functional solution. Analyze the business problem and validate the key business requirements to create a complete picture of workflows and technical requirements fulfilled by existing and proposed software. Identify, define, and evaluate potential product solutions, including off-the-shelf and open-source components and system architecture, to ensure they meet business requirements.
- Communication & Collaboration: Act as a liaison between business users and technical solution/support groups to ensure proper communication between diverse teams. Collaborate with the development team (including architecture, coding, and testing teams) to produce and maintain additional product and project deliverables such as technical design, testing and program specifications, additional test scenarios, and the project plan. Proactively manage expectations regarding roadblocks in the critical path to help ensure successful delivery of the solution.
- Business Value: Comprehend the solution being developed/deployed, its business value, and how it fits within the overall architecture and its risks. Drive business metrics that help optimize the business, and deep-dive into data for insights as required.
- Team Mentoring: Train and mentor juniors in the team as needed.

Essential Functions:
- Technologist who enjoys executing and selling healthcare solutions and being on the front line of client communications
- Good understanding of the US healthcare value chain and key impact drivers (Payer and/or Provider)
- Knowledgeable about how data management and data science are used to solve organizational problems in the healthcare context
- Hands-on experience in at least one of the data and analytics technical domains: enterprise cloud data warehousing, integration, preparation, and visualization, along with artificial intelligence, machine learning, data science, data modeling, data management, and data governance
- Strong problem-solving and analytical skills: the ability to break down a vague business problem into structured data analysis approaches, work with incomplete information, and make judgment-driven decisions based on experience

Primary Internal Interactions:
- Review with the Product Manager and AVP for improvements in the product development lifecycle
- Assessment meetings with VP and above for additional product development features
- Train and mentor juniors in the team as needed

Primary External Interactions:
- Communicate with onshore stakeholders and executive team members
- Help the Product Management Group set the product roadmap and identify future sellable product features
- Client interactions to better understand expectations and streamline solutions; if required, act as a bridge between the client and the technology teams

Skills:
- Required: Good knowledge of US healthcare with at least 3 years of experience working with various US healthcare payer clients
- Must have: Good understanding of the software development lifecycle and methodologies such as Agile Scrum and Waterfall. Strong experience in requirements elicitation techniques, functional documentation, stakeholder management, business solution validation, and user walkthroughs. Strong documentation skills to create BRDs, FSDs, process flows, and user stories. Strong presentation skills. Basic knowledge of SQL. Knowledge of tools such as Jira, Visio, and Draw.io.
- Nice to have: Development experience of 1-2 years. Experience with Big Data tools, including but not limited to Python, Spark + Python, Hive, HBase, Sqoop, CouchDB, MongoDB, MS SQL, Cassandra, Kafka. Knowledge of data analysis tools (online analytical processing (OLAP), ETL frameworks). Knowledge of enterprise modeling tools and data integration platforms (Erwin, Embarcadero, Informatica, Talend, SSIS, DataStage, Pentaho). Knowledge of enterprise business intelligence platforms (Tableau, Power BI, Business Objects, MicroStrategy, Cognos). Knowledge of enterprise data warehousing platforms (Oracle, Microsoft, DB2, Snowflake, AWS, Azure, Google Cloud Platform).

Process-Specific Skills:
- Delivery domain: software development - SDLC and Agile certifications
- Business domain: US healthcare insurance and payer analytics, fraud, waste & abuse, payer management, code classification management

Soft Skills:
- Understanding of the healthcare business vertical and its business terms
- Good analytical skills; strong communication skills, oral and written
- Ability to work with various stakeholders across different geographical locations
- Able to function as an individual contributor if required; ability to work independently
- Strong aptitude to learn and implement healthcare solutions

Working Hours: General shift, 12 PM to 9 PM; may be required to extend as per project release needs.

Education Requirements: Master's or bachelor's degree from top-tier colleges with good grades, preferably in a relevant field such as Mathematics, Statistics, or Computer Science, or equivalent experience.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Description

We are seeking a skilled Data Engineer with extensive experience in the Hadoop ecosystem and a strong background in integrating and managing data from FLEXCUBE core banking systems. The ideal candidate will play a key role in designing, implementing, and optimizing our data pipelines, ensuring seamless data flow and analysis.

Career Level: IC3

Key Responsibilities:
- Data Integration: Lead the integration of data from FLEXCUBE core banking systems into our Hadoop-based data infrastructure. Develop and maintain efficient data ingestion processes.
- Hadoop Ecosystem: Design, build, and optimize data pipelines within the Hadoop ecosystem, including HDFS, Sqoop, Unix shell scripting, Python, and Spark.
- Data Modelling: Create and maintain data models, schemas, and structures to support data analysis and reporting requirements.
- ETL Processes: Develop extract, transform, load (ETL) processes to cleanse, enrich, and transform data for downstream consumption.
- Data Quality: Implement data quality checks and monitoring processes to ensure the accuracy, completeness, and consistency of data.
- Performance Optimization: Optimize data processing and query performance within the Hadoop ecosystem.
- Data Security: Ensure data security and compliance with data privacy regulations during data handling and processing.
- Documentation: Maintain thorough documentation of data pipelines, transformations, and data flow processes.
- Collaboration: Collaborate with cross-functional teams, including FLEXCUBE consulting, data scientists, analysts, and business stakeholders, to understand data requirements and deliver actionable insights. Mastery of the FLEXCUBE 14.7 backend tables and data model is essential.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 5 to 8 years of proven experience designing and implementing data solutions within the Hadoop ecosystem
- Strong expertise in Hadoop components such as HDFS, Sqoop, Unix shell scripting, Python, and Spark
- Experience with FLEXCUBE integration and data extraction
- Proficiency in SQL and database systems; knowledge of data modelling and ETL processes
- Strong problem-solving and analytical skills; excellent communication and teamwork abilities
- Banking or financial industry experience is a plus; certifications in Hadoop or related technologies are beneficial

Additional Information: This role offers an exciting opportunity to work on cutting-edge data projects and contribute to data-driven decision-making in the financial sector. The candidate should be prepared to work in a dynamic and collaborative environment. Candidates with a strong background in the Hadoop ecosystem and experience with FLEXCUBE integration are encouraged to apply. We are committed to fostering professional growth and providing opportunities for skill development.

About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
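As an illustration of the pipeline this posting describes, here is a hedged PySpark sketch: FLEXCUBE tables landed in HDFS (for example via Sqoop) are cleansed and republished as Parquet. The table, column, and path names are invented for the example and are not the actual FLEXCUBE 14.7 data model.

```python
# Hedged sketch: cleanse Sqoop-landed core-banking extracts and publish
# them for downstream consumption. All names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("flexcube-etl").enableHiveSupport().getOrCreate()

accounts = spark.read.parquet("hdfs:///landing/flexcube/accounts")  # assumed landing path

clean = (
    accounts
    .filter(F.col("record_status") == "O")      # assumed 'open' status flag
    .withColumn("load_dt", F.current_date())    # audit column for lineage
    .dropDuplicates(["account_no"])             # assumed business key
)

# Basic data-quality gate before publishing downstream.
assert clean.filter(F.col("account_no").isNull()).count() == 0

clean.write.mode("overwrite").parquet("hdfs:///curated/flexcube/accounts")
```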

Posted 3 weeks ago

Apply

4.0 - 5.0 years

3 - 7 Lacs

Mumbai, Pune, Chennai

Work from Office

Source: Naukri

Job Category: IT
Job Type: Full Time
Job Location: Bangalore, Chennai, Mumbai, Pune
Experience: 4 to 5 years

JD: Azure Data Engineer with QA

Must have: Azure Databricks, Azure Data Factory, Spark SQL
- 4-5 years of development experience in Azure Databricks
- Strong experience in SQL, along with performing Azure Databricks quality assurance
- Understand complex data systems by working closely with engineering and product teams
- Develop scalable and maintainable applications to extract, transform, and load data in various formats to SQL Server, a Hadoop data lake, or other data storage locations

Kind note: Please apply or share your resume only if it matches the above criteria.
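For flavor, a small PySpark sketch of the kind of Databricks quality-assurance check this role implies: reconciling a staging extract against the loaded target. All table and column names are placeholders.

```python
# Hedged QA sketch: compare row counts and key completeness between a
# staging table and the curated target after a load. Names are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adb-qa").getOrCreate()

src = spark.table("staging.customers")   # assumed staging table
tgt = spark.table("curated.customers")   # assumed curated table

src_count, tgt_count = src.count(), tgt.count()
if src_count != tgt_count:
    raise AssertionError(f"Row count mismatch: {src_count} source vs {tgt_count} target")

# Null-rate check on a key column the load must preserve.
null_keys = tgt.filter(F.col("customer_id").isNull()).count()
if null_keys:
    raise AssertionError(f"{null_keys} rows lost their customer_id during the load")
```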

Posted 3 weeks ago

Apply

5.0 - 8.0 years

5 - 9 Lacs

Kolkata

Work from Office

Source: Naukri

Role Purpose: The purpose of this role is to design, develop, and troubleshoot solutions/designs/models/simulations on various software packages as per client/project requirements.

Responsibilities:
1. Design and develop solutions as per client specifications
- Work on different software such as CAD and CAE to develop appropriate models as per the project plan/customer requirements
- Test the prototypes and designs produced in the software and check all the boundary conditions (impact analysis, stress analysis, etc.)
- Produce specifications and determine operational feasibility by integrating software components into a fully functional software system
- Create a prototype as per the engineering drawings, and prepare the outline CAD model
- Perform failure mode and effects analysis (FMEA) for any new requirements received from the client
- Provide optimized solutions to the client by running simulations in a virtual environment
- Ensure the software is updated with the latest features to make it cost-effective for the client
- Enhance applications/solutions by identifying opportunities for improvement, making recommendations, and designing and implementing systems
- Follow industry-standard operating procedures for various processes and systems, as per the client requirements, while modeling a solution in the software

2. Provide customer support and problem solving from time to time
- Fix defects raised by the client or the software integration team while resolving tickets
- Develop software verification plans and quality assurance procedures for the customer
- Troubleshoot, debug, and upgrade existing systems on time, with minimum latency and maximum efficiency
- Deploy programs and evaluate user feedback for adequate resolution and customer satisfaction
- Comply with project plans and industry standards

3. Ensure reporting and documentation for the client
- Prepare weekly and monthly status reports for the clients as per requirements
- Maintain documents and create a repository of all design changes, recommendations, etc.
- Maintain time-sheets for the clients
- Provide written knowledge transfer/history of the project

Performance parameters:
1. Design and develop solutions - adherence to project plan/schedule, 100% error-free onboarding and implementation, throughput %
2. Quality & CSAT - on-time delivery, minimum corrections, first time right, no major defects post production, 100% compliance with the bi-directional traceability matrix, completion of assigned certifications for skill upgradation
3. MIS & reporting - 100% on-time MIS and report generation

Mandatory skills: StreamSets
Experience: 5-8 years

Posted 3 weeks ago

Apply

5.0 - 8.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Source: Naukri

Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client end, and ensure they meet 100% quality assurance parameters.

Responsibilities:
1. Understand the requirements and design of the product/software
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas, following the software development lifecycle
- Facilitate root-cause analysis of system issues and the problem statement
- Identify ideas to improve system performance and availability
- Analyze client requirements and convert them into feasible designs
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities

2. Perform coding and ensure optimal software/module development
- Determine operational feasibility by evaluating the analysis, problem definition, requirements, software development, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications to existing ones
- Ensure that code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all issues are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risk, and report to the concerned stakeholders

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution
- Capture all requirements and clarifications from the client for better-quality work
- Take feedback regularly to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Document necessary details and reports formally for proper understanding of the software, from client proposal to implementation
- Ensure good-quality interactions with the customer (e-mail content, fault report tracking, voice calls, business etiquette, etc.)
- Respond to customer requests on time, with no instances of complaints, internal or external

Performance parameters:
1. Continuous integration, deployment & monitoring of software - 100% error-free onboarding and implementation, throughput %, adherence to the schedule/release plan
2. Quality & CSAT - on-time delivery, software management, troubleshooting of queries, customer experience, completion of assigned certifications for skill upgradation
3. MIS & reporting - 100% on-time MIS and report generation

Mandatory skills: PySpark
Experience: 5-8 years

Posted 3 weeks ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Pune

Work from Office

Source: Naukri

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Responsibilities:
- Oversee and support the process by reviewing daily transactions on performance parameters; review the performance dashboard and the scores for the team; support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, the problem-solving steps taken, and the total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries, and resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot recurring trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after call/email requests; avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries; manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve, escalate in a timely manner to TA & SES
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions; troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business; organize ideas and communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client; mentor and guide Production Specialists on improving technical knowledge
- Collate and conduct trainings (triages) to bridge the skill gaps identified through interviews with the Production Specialists, and inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates; enroll in product-specific and other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team; update job knowledge by participating in self-learning opportunities and maintaining personal networks

Performance parameters:
1. Process - number of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2. Team management - productivity, efficiency, absenteeism
3. Capability development - triages completed, technical test performance

Mandatory skills: Hadoop
Experience: 5-8 years

Posted 3 weeks ago

Apply

2.0 - 6.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Area: Engineering Group > Software Applications Engineering

General Summary:
Title: Senior Staff
Job function: Senior Staff Multimedia - Customer Engineering

Skills/experience:
- Extensive experience in design and development in multimedia domains such as Display, Graphics, and Video
- Knowledge of automotive display technologies/pipeline in the Linux DRM/KMS framework or QNX, graphics engines, and video technologies
- Experience in the Linux kernel and device drivers
- Experience in automotive infotainment or digital cluster platform development; system knowledge of automotive architecture and products
- Working experience with Android, QNX, and hypervisor-based platforms
- Working knowledge of any of the languages C, C++, Java
- Working knowledge of debug tools related to memory, gdb, coredump, JTAG
- Good experience in customer engagement and management

Responsibilities:
- Provide firsthand support to Qualcomm IVI and/or ADAS customers
- Support issues reported by customers in the lab, drive tests, and certifications
- Perform root-cause analysis of customer issues alongside the internal technology teams and provide feedback to the engineering team
- Apply domain experience in any or all key multimedia technologies (display, graphics, video)
- Software delivery management: identify, verify, and deliver fixes for software failures
- Engage with customers directly on failure reports, new feature requirements, new project requirements, and schedule management
- Triage and debug software failures reported by customers on Android, auto-grade Linux, and QNX software stacks
- Document customer issues, key feature requirements, and design data
- Work closely with internal technology teams to support the fix process
- Support Qualcomm customers when required to resolve launch-gating issues

Education requirements:
Required: Bachelor's in Computer Engineering and/or Computer Science and/or Electronics Engineering
Preferred: Master's in Computer Engineering and/or Computer Science and/or Electronics Engineering

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 6+ years of Software Applications Engineering, Software Development, or related work experience; OR Master's degree and 5+ years of such experience; OR PhD and 4+ years of such experience
- 3+ years of experience with a programming language such as C, C++, Java, Python, etc.
- 3+ years of experience with debugging techniques

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all staffing and recruiting agencies: please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Title: DevOps Engineer - AWS Glue, KMS, ALB, ECS and Terraform/Terragrunt
Experience: 5-10 years
Location: Bangalore
Skills: DevOps, AWS, Glue, KMS, ALB, ECS, Terraform, Terragrunt

Posted 3 weeks ago

Apply

5.0 - 10.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Title: AWS Data Engineer
Experience: 5-10 years
Location: Bangalore

Technical skills:
- 5+ years of experience as an AWS Data Engineer: AWS S3, Glue Catalog, Glue Crawler, Glue ETL, Athena
- Write Glue ETL jobs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3
- Execute Glue crawlers to catalog S3 files; create a catalog of S3 files for easier querying
- Create SQL queries in Athena; define data lifecycle management for S3 files
- Strong experience developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio
- Ability to connect Glue ETL jobs with AWS RDS (SQL Server and Oracle) for data extraction and write transformed data in Parquet format to S3
- Proficiency in setting up and managing Glue crawlers to catalog data in S3
- Deep understanding of S3 architecture and best practices for storing large datasets; experience partitioning and organizing data for efficient querying in S3; knowledge of the advantages of the Parquet file format for optimized storage and querying
- Expertise in creating and managing the AWS Glue Data Catalog to enable structured, schema-aware querying of data in S3
- Experience with Amazon Athena for writing complex SQL queries and optimizing query performance; familiarity with creating views or transformations in Athena for business use cases
- Knowledge of securing data in S3 using IAM policies, S3 bucket policies, and KMS encryption; understanding of regulatory requirements (e.g., GDPR) and secure data-handling practices

Non-technical skills:
- Good team player with effective interpersonal, team-building, and communication skills
- Ability to communicate complex technology to a non-technical audience in a simple and precise manner
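A minimal sketch of the Glue ETL flow described above, assuming a crawler has already cataloged the RDS source: read the cataloged table and write partitioned Parquet to S3 for Athena. The catalog database, table name, and bucket are assumptions, and the awsglue imports are the standard Glue job boilerplate.

```python
# Hedged Glue job sketch: cataloged RDS table -> partitioned Parquet in S3.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Source: an RDS SQL Server table already cataloged by a Glue crawler.
dyf = glue.create_dynamic_frame.from_catalog(
    database="rds_sales",      # assumed catalog database
    table_name="dbo_orders",   # assumed crawled table
)

# Target: partitioned Parquet in S3, ready for Athena after a re-crawl.
glue.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://analytics-lake/orders/",  # assumed bucket
                        "partitionKeys": ["order_date"]},
    format="parquet",
)
job.commit()
```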

Posted 3 weeks ago

Apply

4.0 - 6.0 years

2 - 6 Lacs

Hyderabad, Pune, Gurugram

Work from Office

Source: Naukri

Job Title: Sr AWS Data Engineer
Experience: 4-6 years
Location: Pune, Hyderabad, Gurgaon, Bangalore (Hybrid)
Skills: PySpark, Python, SQL, AWS services - S3, Athena, Glue, EMR/Spark, Redshift, Lambda, Step Functions, IAM, CloudWatch

Posted 3 weeks ago

Apply

6.0 - 10.0 years

3 - 6 Lacs

Pune

Work from Office

Source: Naukri

Job Title: Senior Data Engineer
Experience: 6-10 years
Location: Pune

Posted 3 weeks ago

Apply

6.0 - 8.0 years

1 - 4 Lacs

Chennai

Work from Office

Source: Naukri

Job Title: Snowflake Developer
Experience: 6-8 years
Location: Chennai (Hybrid)

- 3+ years of experience as a Snowflake Developer or Data Engineer
- Strong knowledge of SQL, SnowSQL, and Snowflake schema design
- Experience with ETL tools and data pipeline automation
- Basic understanding of US healthcare data (claims, eligibility, providers, payers)
- Experience working with large-scale datasets and cloud platforms (AWS, Azure, GCP)
- Familiarity with data governance, security, and compliance (HIPAA, HITECH)
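For illustration, a hedged sketch using the Snowflake Python connector to load a staged file and publish a cleaned view, the SnowSQL-style work a developer in this role would automate. The account identifier, stage, and table names are placeholders.

```python
# Hedged Snowflake sketch: COPY a staged CSV into a raw table, then
# publish a cleaned view. All identifiers are assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.us-east-1",   # assumed account identifier
    user="ETL_USER",
    password="<secret>",
    warehouse="LOAD_WH",
    database="CLAIMS_DB",
    schema="STAGING",
)

cur = conn.cursor()
try:
    # Load a staged file, then expose only well-formed rows downstream.
    cur.execute("COPY INTO claims_raw FROM @claims_stage "
                "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
    cur.execute("""
        CREATE OR REPLACE VIEW claims_clean AS
        SELECT claim_id, member_id, TRY_TO_DATE(service_date) AS service_date
        FROM claims_raw
        WHERE claim_id IS NOT NULL
    """)
finally:
    cur.close()
    conn.close()
```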

Posted 3 weeks ago

Apply

10.0 - 12.0 years

9 - 13 Lacs

Chennai

Work from Office

Source: Naukri

Job Title: Data Architect
Experience: 10-12 years
Location: Chennai

- 10-12 years of experience as a Data Architect
- Strong expertise in streaming data technologies such as Apache Kafka, Flink, Spark Streaming, or Kinesis
- Proficiency in programming languages such as Python, Java, Scala, or Go
- Experience with big data tools such as Hadoop and Hive, and data warehouses such as Snowflake, Redshift, Databricks, and Microsoft Fabric
- Proficiency in database technologies (SQL, NoSQL, PostgreSQL, MongoDB, DynamoDB, YugabyteDB)
- Should be flexible to work as an individual contributor

Posted 3 weeks ago

Apply

3.0 - 6.0 years

9 - 14 Lacs

Mumbai

Work from Office

Source: Naukri

Role Overview: We are looking for a Talend Data Catalog Specialist to drive enterprise data governance initiatives by implementing Talend Data Catalog and integrating it with Apache Atlas for unified metadata management within a Cloudera-based data lakehouse. The role involves establishing metadata lineage, glossary harmonization, and governance policies to enhance trust, discovery, and compliance across the data ecosystem.

Key Responsibilities:
- Set up and configure Talend Data Catalog to ingest and manage metadata from source systems, the data lake (HDFS), Iceberg tables, the Hive metastore, and external data sources
- Develop and maintain business glossaries, data classifications, and metadata models
- Design and implement bi-directional integration between Talend Data Catalog and Apache Atlas to enable metadata synchronization, lineage capture, and policy alignment across the Cloudera stack
- Map technical metadata from Hive/Impala to business metadata defined in Talend
- Capture end-to-end lineage of data pipelines (e.g., from ingestion in PySpark to consumption in BI tools) using Talend and Atlas
- Provide impact analysis for schema changes, data transformations, and governance rule enforcement
- Support the definition and rollout of enterprise data governance policies (e.g., ownership, stewardship, access control)
- Enable role-based metadata access, tagging, and data sensitivity classification
- Work with data owners, stewards, and architects to ensure data assets are well documented, governed, and discoverable
- Provide training to users on leveraging the catalog for search, understanding, and reuse

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- 6-12 years in data governance or metadata management, with at least 2-3 years in Talend Data Catalog
- Talend Data Catalog, Apache Atlas, Cloudera CDP, Hive/Impala, Spark, HDFS, SQL
- Business glossary, metadata enrichment, lineage tracking, stewardship workflows
- Hands-on experience with Talend-Atlas integration, whether through REST APIs, Kafka hooks, or metadata bridges
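As a sketch of one of the integration paths named above (REST APIs), the snippet below registers a table entity with Apache Atlas over its v2 REST endpoint. The host, credentials, and qualifiedName convention are assumptions, not the project's actual bridge.

```python
# Illustrative only: push a metadata entity to Apache Atlas via its v2
# REST API, the kind of Talend-to-Atlas bridge this role describes.
import requests

ATLAS = "http://atlas-host:21000/api/atlas/v2"  # assumed Atlas endpoint
AUTH = ("admin", "<secret>")                    # assumed credentials

entity = {
    "entity": {
        "typeName": "hive_table",
        "attributes": {
            "qualifiedName": "curated.customers@cluster1",  # assumed naming convention
            "name": "customers",
            "description": "Curated customer table, documented in Talend Data Catalog",
        },
    }
}

# POST /entity creates or updates the entity and returns assigned GUIDs.
resp = requests.post(f"{ATLAS}/entity", json=entity, auth=AUTH, timeout=30)
resp.raise_for_status()
print(resp.json().get("guidAssignments"))
```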

Posted 3 weeks ago

Apply

3.0 - 7.0 years

6 - 10 Lacs

Mumbai

Work from Office

Source: Naukri

Role Overview: Looking for a Kafka SME to design and support real-time data ingestion pipelines using Kafka within a Cloudera-based lakehouse architecture.

Key Responsibilities:
- Design Kafka topics, partitions, and the schema registry
- Implement producer-consumer applications using Spark Structured Streaming
- Set up Kafka Connect, monitoring, and alerts
- Ensure secure, scalable message delivery

Required education: Bachelor's degree
Preferred education: Master's degree

Skills Required:
- Deep understanding of Kafka internals and its ecosystem
- Integration with Cloudera and NiFi
- Schema evolution and serialization (Avro, Parquet)
- Performance tuning and fault tolerance

Preferred technical and professional experience: Good communication skills; India market experience is preferred.
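A minimal sketch of the consumer side of the responsibilities above: Spark Structured Streaming reading a Kafka topic and landing it in the lakehouse. Broker addresses, the topic, and sink paths are placeholders, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Hedged consumer sketch: Kafka topic -> Parquet in the lakehouse with
# checkpointing for fault tolerance. All names are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")  # assumed brokers
    .option("subscribe", "txn-events")                               # assumed topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before writing.
decoded = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

query = (
    decoded.writeStream.format("parquet")
    .option("path", "hdfs:///lake/raw/txn_events")           # assumed sink path
    .option("checkpointLocation", "hdfs:///chk/txn_events")  # required for recovery
    .start()
)
query.awaitTermination()
```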

Posted 3 weeks ago

Apply

6.0 - 11.0 years

18 - 22 Lacs

Pune

Work from Office

Source: Naukri

The Data Scientist - Generative AI & NLP Specialist will be responsible for designing, developing, and deploying AI models and solutions that meet our business needs. With 4+ years of hands-on data science experience and at least 2+ years working in Generative AI, you will bring specialized expertise in LLMs and NLP. Project experience in NLP is a must, and experience in developing AI agents will be considered a strong plus. This role suits a creative, analytical, and proactive individual focused on pushing the capabilities of AI within our projects.

Primary Skill:
- Develop and implement AI models focused on NLP tasks such as text classification, entity recognition, sentiment analysis, and language generation (see the sketch after this list)
- Leverage deep knowledge of Large Language Models (LLMs) to design, fine-tune, and deploy high-impact solutions across various business domains
- Collaborate with cross-functional teams (data engineers, product managers, and domain experts) to define problem statements, build robust data pipelines, and integrate models into production systems
- Stay current with advancements in Generative AI and NLP; research and evaluate new methodologies to drive innovation and maintain a competitive edge
- Build, test, and optimize AI agents for automated tasks and enhanced user experiences where applicable

Secondary Skill
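To make the listed NLP tasks concrete, here is a small hedged example using Hugging Face pipelines; the default models and the sample text are illustrative, not a prescribed stack for this role.

```python
# Illustrative NLP sketch: sentiment analysis and named-entity
# recognition with off-the-shelf Hugging Face pipelines.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")  # merge sub-word tokens

text = "The claims portal kept timing out for Acme Corp users in Mumbai."
print(sentiment(text))  # e.g., [{'label': 'NEGATIVE', 'score': ...}]
print(ner(text))        # entities such as ORG 'Acme Corp', LOC 'Mumbai'
```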

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About Persistent

We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative global companies, 60% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our disruptor's mindset, commitment to client success, and agility to thrive in the dynamic environment have enabled us to sustain our growth momentum by reporting $1,409.1M revenue in FY25, delivering 18.8% Y-o-Y growth. Our 23,900+ global team members, located in 19 countries, have been instrumental in helping the market leaders transform their industries. We are also pleased to share that Persistent won in four categories at the prestigious 2024 ISG Star of Excellence™ Awards, including the Overall Award based on the voice of the customer. We were included in the Dow Jones Sustainability World Index, setting high standards in sustainability and corporate responsibility. We were awarded for our state-of-the-art learning and development initiatives at the 16th TISS LeapVault CLO Awards. In addition, we were cited as the fastest-growing IT services brand in the 2024 Brand Finance India 100 Report. Throughout our market-leading growth, we've maintained a strong employee satisfaction score of 8.2/10. At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. For more details, please visit www.persistent.com.

About The Position

We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company's information.

What You'll Do
- Manage the customer's priorities across projects and requests
- Assess customer needs using a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs and advise on options, risks, and cost
- Design and implement software products (Big Data related), including data models and visualizations
- Participate actively in the teams you work in, and deliver good solutions against tight timescales
- Be proactive, suggest new approaches, and develop your capabilities; share what you are good at while learning from others to improve the team overall
- Demonstrate a solid level of understanding across a range of technical skills, attitudes, and behaviors
- Deliver great solutions and stay focused on driving value back into the business

Expertise You'll Bring
- 6 years' experience designing and developing enterprise application solutions for distributed systems
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume); see the sketch at the end of this listing
- Additional experience working with Hadoop, HDFS, cluster management, Hive, Pig, MapReduce, and Hadoop ecosystem frameworks: HBase, Talend, NoSQL databases
- Apache Spark or other streaming Big Data processing is preferred; Java or Big Data technologies experience is a plus

Benefits
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.

Inclusive Environment: We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to: accelerate growth, both professionally and personally; impact the world in powerful, positive ways, using the latest technologies; enjoy collaborative innovation, with diversity and work-life wellbeing at the core; and unlock global opportunities to work and learn with the industry's best.

Let's unleash your full potential at Persistent - persistent.com/careers
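As a flavor of the Hadoop-ecosystem skills listed under "Expertise You'll Bring", a hedged PySpark sketch querying a Hive table (for example, one landed via Sqoop) and writing an aggregate back to the warehouse; all table names are invented.

```python
# Hedged sketch: Hive-enabled Spark session computing a daily aggregate
# over a Sqoop-landed staging table. Names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("daily-aggregates")
    .enableHiveSupport()  # lets Spark read/write Hive metastore tables
    .getOrCreate()
)

daily = spark.sql("""
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM staging.orders            -- assumed table landed via Sqoop
    GROUP BY order_date
""")

daily.write.mode("overwrite").saveAsTable("mart.daily_orders")
```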

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Delhi, Delhi

On-site


Job Description: Hadoop & ETL Developer

Job Summary

We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.

Key Responsibilities

  • Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies
  • Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation
  • Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte
  • Develop and manage workflow orchestration using Apache Airflow
  • Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage
  • Optimize MapReduce and Spark jobs for performance, scalability, and efficiency
  • Ensure data quality, governance, and consistency across the pipeline
  • Collaborate with data engineering teams to build scalable, high-performance data solutions
  • Monitor, debug, and enhance big data workflows to improve reliability and efficiency

Required Skills & Experience

  • 3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark)
  • Strong expertise in ETL processes, data transformation, and data warehousing
  • Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte
  • Proficiency in SQL and handling structured and unstructured data
  • Experience with NoSQL databases such as MongoDB
  • Strong programming skills in Python or Scala for scripting and automation
  • Experience in optimizing Spark and MapReduce jobs for high-performance computing
  • Good understanding of data lake architectures and big data best practices

Preferred Qualifications

  • Experience in real-time data streaming and processing
  • Familiarity with Docker/Kubernetes for deployment and orchestration
  • Strong analytical and problem-solving skills with the ability to debug and optimize data workflows

If you have a passion for big data, ETL, and large-scale data processing, we'd love to hear from you!

Job Types: Full-time, Contractual / Temporary
Pay: ₹400,000.00 - ₹1,100,000.00 per year
Schedule: Day shift, Monday to Friday, Morning shift
Application Question(s):
  • How many years of experience do you have in Big Data ETL?
  • How many years of experience do you have in Hadoop?
  • Are you willing to work on a contractual basis?
  • Are you comfortable being on a third-party payroll?
  • Are you from Delhi?
  • What is the notice period at your current company?
Work Location: In person

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About Persistent

We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative global companies, 60% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem.

Our disruptor's mindset, commitment to client success, and agility to thrive in a dynamic environment have enabled us to sustain our growth momentum, reporting $1,409.1M revenue in FY25 and delivering 18.8% Y-o-Y growth. Our 23,900+ global team members, located in 19 countries, have been instrumental in helping market leaders transform their industries. Persistent won in four categories at the prestigious 2024 ISG Star of Excellence™ Awards, including the Overall Award based on the voice of the customer. We were included in the Dow Jones Sustainability World Index, setting high standards in sustainability and corporate responsibility, were recognized for our state-of-the-art learning and development initiatives at the 16th TISS LeapVault CLO Awards, and were cited as the fastest-growing IT services brand in the 2024 Brand Finance India 100 Report. Throughout this market-leading growth, we have maintained a strong employee satisfaction score of 8.2/10.

At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. For more details, please visit www.persistent.com.

About The Position

We are looking for a Big Data Lead who will be responsible for managing data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs to transform the data into a more usable format, and you will ensure that the data is secure and complies with industry standards to protect the company's information.
What You'll Do

  • Manage the customer's priorities across projects and requests
  • Assess customer needs using a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs and advise on options, risks, and cost
  • Design and implement Big Data software products, including data models and visualizations
  • Participate actively in the teams you work with
  • Deliver good solutions against tight timescales
  • Be proactive, suggest new approaches, and develop your capabilities
  • Share what you are good at while learning from others to improve the team overall
  • Demonstrate a working understanding of a range of technical skills, attitudes, and behaviors
  • Deliver great solutions focused on driving value back into the business

Expertise You'll Bring

  • 6 years' experience in designing and developing enterprise application solutions for distributed systems
  • Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume)
  • Experience with Hadoop, HDFS, and cluster management; Hive, Pig, and MapReduce; and Hadoop ecosystem frameworks such as HBase, Talend, and NoSQL databases
  • Apache Spark or other streaming Big Data processing preferred; Java or additional Big Data technologies a plus

Benefits

  • Competitive salary and benefits package
  • Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
  • Opportunity to work with cutting-edge technologies
  • Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
  • Annual health check-ups
  • Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.

Inclusive Environment

We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with a disability and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:

  • Accelerate growth, both professionally and personally
  • Impact the world in powerful, positive ways, using the latest technologies
  • Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
  • Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent - persistent.com/careers

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow, and Unix/Linux environments is essential.

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote

Posted 3 weeks ago

Apply

Exploring Sqoop Jobs in India

India has seen a rise in demand for professionals skilled in Sqoop, a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases. Job seekers with expertise in Sqoop can explore various opportunities in the Indian job market.
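
At its core, Sqoop is a pair of command-line tools that turn a JDBC connection into parallel MapReduce jobs. As a minimal sketch of a bulk import (the connection string, credentials, table, and paths below are hypothetical placeholders, not any employer's actual setup):

```bash
# Pull the "orders" table from a relational database into HDFS,
# splitting the work across 4 parallel map tasks on the order_id key.
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/raw/orders \
  --split-by order_id \
  --num-mappers 4
```

Each mapper fetches a disjoint slice of rows, which is what makes Sqoop suited to bulk transfers rather than row-at-a-time loads.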

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Sqoop professionals in India varies by experience level:

  • Entry-level: Rs. 3-5 lakhs per annum
  • Mid-level: Rs. 6-10 lakhs per annum
  • Experienced: Rs. 12-20 lakhs per annum

Career Path

Typically, a career in Sqoop progresses as follows:

  1. Junior Developer
  2. Sqoop Developer
  3. Senior Developer
  4. Tech Lead

Related Skills

In addition to expertise in Sqoop, professionals in this field are often expected to have knowledge of:

  • Apache Hadoop
  • SQL
  • Data warehousing concepts
  • ETL tools

Interview Questions

  • What is Sqoop and why is it used? (basic)
  • Explain the difference between Sqoop import and Sqoop export commands. (medium) (see the export sketch after this list)
  • How can you perform incremental imports using Sqoop? (medium) (see the incremental-import sketch after this list)
  • What are the limitations of Sqoop? (medium)
  • What is the purpose of the metastore in Sqoop? (advanced)
  • Explain the various options available in the Sqoop import command. (medium)
  • How can you schedule Sqoop jobs in a production environment? (advanced)
  • What is the role of the Sqoop connector in data transfer? (medium)
  • How does Sqoop handle data consistency during imports? (medium)
  • Can you use Sqoop with NoSQL databases? If yes, how? (advanced)
  • What are the different file formats supported by Sqoop for importing and exporting data? (basic)
  • Explain the concept of split-by column in Sqoop. (medium)
  • How can you import data directly into Hive using Sqoop? (medium) (see the Hive import sketch after this list)
  • What are the security considerations while using Sqoop? (advanced)
  • How can you improve the performance of Sqoop imports? (medium)
  • Explain the syntax of the Sqoop export command. (basic)
  • What is the significance of boundary queries in Sqoop? (medium)
  • How does Sqoop handle data serialization and deserialization? (medium)
  • What are the different authentication mechanisms supported by Sqoop? (advanced)
  • How can you troubleshoot common issues in Sqoop imports? (medium)
  • Explain the concept of direct mode in Sqoop. (medium)
  • What are the best practices for optimizing Sqoop performance? (advanced)
  • How does Sqoop handle data types mapping between Hadoop and relational databases? (medium)
  • What are the differences between Sqoop and Flume? (basic)
  • How can you import data from a mainframe into Hadoop using Sqoop? (advanced)
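
A hedged sketch for the incremental-import question: with `--incremental append`, Sqoop fetches only rows whose check column exceeds the last recorded value. Wrapping the import in a saved job stores that value in the Sqoop metastore, so repeated runs resume automatically (all names and connection details here are illustrative):

```bash
# Create a saved job; everything after the bare "--" is a normal import.
# The metastore records the highest order_id seen on each run.
# Password handling is elided; a --password-file is typical in practice.
sqoop job --create daily_orders -- import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user \
  --table orders \
  --target-dir /data/raw/orders \
  --incremental append \
  --check-column order_id \
  --last-value 0

# Execute the job; subsequent runs import only newly added rows.
sqoop job --exec daily_orders
```

This also partly answers the metastore and scheduling questions: in production, a scheduler such as Oozie or cron typically just invokes `sqoop job --exec` on a timetable.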
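
For the import-versus-export question, export is the mirror image of import: it reads files from HDFS and writes rows into an existing relational table. A sketch with hypothetical names:

```bash
# Push aggregated results from HDFS back into a warehouse table.
# --update-key turns matching rows into UPDATEs; allowinsert also
# INSERTs rows whose key is not yet present (an upsert, where the
# connector supports it).
sqoop export \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user -P \
  --table orders_summary \
  --export-dir /data/out/orders_summary \
  --input-fields-terminated-by ',' \
  --update-key order_id \
  --update-mode allowinsert
```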
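
For the Hive-import, split-by, and boundary-query questions, the sketch below loads a table straight into Hive and overrides how Sqoop partitions the work among mappers (all identifiers are illustrative):

```bash
# --hive-import loads the data into a Hive table after the HDFS import;
# --boundary-query replaces the MIN/MAX query Sqoop would otherwise run
# on the --split-by column to compute each mapper's row range.
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user -P \
  --table orders \
  --split-by order_id \
  --boundary-query "SELECT MIN(order_id), MAX(order_id) FROM orders" \
  --hive-import \
  --hive-table sales.orders
```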

Closing Remark

As you explore job opportunities in the field of Sqoop in India, make sure to prepare thoroughly and showcase your skills confidently during interviews. Stay updated with the latest trends and advancements in Sqoop to enhance your career prospects. Good luck with your job search!
