
921 Sqoop Jobs - Page 15


5.0 - 10.0 years

22 - 27 Lacs

Chennai, Mumbai (All Areas)

Work from Office

Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented. Required candidate profile: experience with strong SQL query/development proficiency; ability to develop ETL routines that manipulate and transfer large volumes of data and perform quality checks; experience in the healthcare industry with PHI/PII.
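The "quality checks" this listing pairs with ETL routines usually boil down to row-count and required-field validations run after each load. A minimal Python sketch; the `validate_batch` helper and the field names are illustrative, not from any listing's actual stack:

```python
def validate_batch(rows, required_fields, min_rows=1):
    """Run basic post-load quality checks on a transferred batch.

    Returns a dict of check results; a failed check flags the batch
    for re-extraction rather than silently loading bad data.
    """
    failures = []
    if len(rows) < min_rows:
        failures.append(f"row count {len(rows)} below minimum {min_rows}")
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) in (None, ""))
        if nulls:
            failures.append(f"{nulls} null(s) in required field '{field}'")
    return {"passed": not failures, "failures": failures}

batch = [
    {"patient_id": "P1", "visit_date": "2024-01-03"},
    {"patient_id": "P2", "visit_date": None},
]
result = validate_batch(batch, required_fields=["patient_id", "visit_date"])
```

In practice a check like this runs as a post-hook after each dbt model or Fivetran sync, gating downstream jobs on the result.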

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Greetings from TCS! TCS is hiring for MDM Support - Support Executive. Desired experience range: 4-8 years. Job location: Hyderabad. Required technical skill set: IBM MDM, Java, Oracle SQL Database, Linux, Apigee and OpenShift; Sqoop, Hive, Spark, Oozie, and Flume on Cloudera (CDH). Desired competency: 3 years of experience with Sqoop, Hive, Spark, Oozie, and Flume on Cloudera (CDH). Plan, execute, and implement applications and configuration change procedures/requests. Supervise all alerts related to application and system procedures and provide services proactively. Install and prepare tools required for the proper functioning of applications on a regular basis. 2+ years of experience working with SQL; 2+ years of experience working with Git; 2+ years of experience in development, maintenance, operations, and support.

Posted 1 month ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Major skillset: GCP, PySpark, SQL, Python, Cloud Architecture, ETL, Automation. 4+ years of experience in data engineering and management with a strong focus on Spark for building production-ready data pipelines. Experienced in analyzing large data sets from multiple data sources and building automated testing and validations. Knowledge of the Hadoop ecosystem and components such as HDFS, Spark, Hive, and Sqoop. Strong Python experience. Hands-on SQL and HQL for writing optimized queries. Strong hands-on experience with GCP BigQuery, Dataproc, Airflow DAGs, Dataflow, GCS, Pub/Sub, Secret Manager, Cloud Functions, and Beam. Ability to work in a fast-paced collaborative environment and work with various stakeholders to define strategic optimization initiatives. Deep understanding of distributed computing, memory tuning, and Spark optimization. Familiar with CI/CD workflows and Git. Experience in designing modular, automated, and secure ETL frameworks.
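The Airflow DAGs this listing names are, underneath, dependency graphs over pipeline tasks: each task runs only after its upstream tasks complete. Python's standard `graphlib` can sketch the ordering idea (the task names here are invented for illustration, not from the listing):

```python
from graphlib import TopologicalSorter

# Task dependency graph: each key runs only after its listed upstream
# tasks complete (mirrors how an Airflow DAG orders pipeline steps).
dag = {
    "extract_gcs": [],
    "load_bigquery": ["extract_gcs"],
    "transform_dataproc": ["load_bigquery"],
    "publish_pubsub": ["transform_dataproc"],
}
order = list(TopologicalSorter(dag).static_order())
```

A real Airflow DAG adds scheduling, retries, and operators on top, but the execution order it computes is exactly this topological sort.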

Posted 1 month ago

Apply


3.0 - 7.0 years

9 - 14 Lacs

Hyderabad

Work from Office

HSBC Electronic Data Processing India Pvt Ltd is looking for a GCP Data Engineer / Senior Consultant Specialist to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Noida

Work from Office

We are looking for a skilled Data Software Engineer with 5 to 12 years of experience in Big Data and related technologies. The ideal candidate will have expertise in distributed computing principles, Apache Spark, and hands-on programming with Python. Roles and Responsibility: Design and implement Big Data solutions using Apache Spark and other relevant technologies. Develop and maintain large-scale data processing systems, including stream-processing systems. Collaborate with cross-functional teams to integrate data from multiple sources, such as RDBMS, ERP, and files. Optimize performance of Spark jobs and troubleshoot issues. Lead a team efficiently and contribute to the development of Big Data solutions. Experience with native cloud data services, such as AWS or Azure Databricks. Job requirements: Expert-level understanding of distributed computing principles and Apache Spark. Hands-on programming experience with Python and proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop. Experience building stream-processing systems using technologies like Apache Storm or Spark Streaming. Good understanding of Big Data querying tools, such as Hive and Impala. Knowledge of ETL techniques and frameworks, along with experience with NoSQL databases like HBase, Cassandra, and MongoDB. Ability to work in an Agile environment and lead a team efficiently. Strong understanding of SQL queries, joins, stored procedures, and relational schemas. Experience integrating data from multiple sources, including RDBMS (SQL Server, Oracle), ERP, and files.
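The stream-processing systems mentioned here (Spark Streaming, Storm) process data in small micro-batches with a map-reduce pattern. This is a stdlib-only Python sketch of the classic per-micro-batch word count, not actual Spark API code:

```python
from collections import Counter
from itertools import chain

def process_micro_batch(lines):
    """Word count over one micro-batch, mimicking the classic
    flatMap -> map -> reduceByKey pipeline in Spark Streaming."""
    words = chain.from_iterable(line.split() for line in lines)  # flatMap
    return Counter(words)  # equivalent to mapping to (word, 1) and reducing by key

batch = ["spark streams data", "spark reduces data"]
counts = process_micro_batch(batch)
```

Spark applies the same logic, but partitions each batch across executors and merges the per-partition counts in the reduce step.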

Posted 1 month ago

Apply

9.0 - 14.0 years

3 - 7 Lacs

Noida

Work from Office

We are looking for a skilled Data Engineer with 9 to 15 years of experience in the field. The ideal candidate will have expertise in designing and developing data pipelines using Confluent Kafka, ksqlDB, and Apache Flink. Roles and Responsibility: Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, and Apache Flink. Build and configure Kafka connectors to ingest data from various sources, including databases, APIs, and message queues. Develop Flink applications for complex event processing, stream enrichment, and real-time analytics. Optimize ksqlDB queries for real-time data transformations, aggregations, and filtering. Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline. Monitor and troubleshoot data pipeline performance, identifying bottlenecks and implementing optimizations. Job requirements: Bachelor's degree or higher from a reputed university. 8 to 10 years of total experience, with a majority related to ETL/ELT, big data, and Kafka. Proficiency in developing Flink applications for stream processing and real-time analytics. Strong understanding of data streaming concepts and architectures. Extensive experience with Confluent Kafka, including Kafka brokers, producers, consumers, and Schema Registry.
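The ksqlDB aggregations and Flink windowed analytics this listing describes both rest on the tumbling-window model: events are bucketed into fixed, non-overlapping event-time intervals. A pure-Python illustration of the bucketing (the function name and sample data are invented):

```python
from collections import defaultdict

def tumbling_window_sum(events, window_ms):
    """Aggregate (timestamp_ms, value) events into fixed, non-overlapping
    event-time windows -- the model behind ksqlDB's WINDOW TUMBLING and
    Flink's tumbling event-time windows."""
    windows = defaultdict(int)
    for ts, value in events:
        window_start = ts - (ts % window_ms)  # bucket key = window start
        windows[window_start] += value
    return dict(windows)

events = [(1_000, 5), (1_500, 3), (2_200, 7)]  # millisecond timestamps
sums = tumbling_window_sum(events, window_ms=1_000)
```

Real stream engines add what this sketch omits: watermarks for late events, incremental state, and window emission triggers.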

Posted 1 month ago

Apply

3.0 - 6.0 years

7 - 11 Lacs

Bengaluru

Work from Office

We are looking for a skilled Data Engineer with 3 to 6 years of experience in processing data pipelines using Databricks, PySpark, and SQL on cloud distributions like AWS. The ideal candidate should have hands-on experience with Databricks, Spark, SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc. Roles and Responsibility: Design and develop large-scale data pipelines using Databricks, Spark, and SQL. Optimize data operations using Databricks and Python. Develop solutions to meet business needs reflecting a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit. Evaluate alternative risks and solutions before taking action. Utilize all available resources efficiently. Collaborate with cross-functional teams to achieve business goals. Job requirements: Experience working on projects involving data engineering and processing. Proficiency in large-scale data operations using Databricks and overall comfort with Python. Familiarity with AWS compute, storage, and IAM concepts. Experience with S3 Data Lake as the storage tier. An ETL background with Talend or AWS Glue is a plus. Cloud warehouse experience with Snowflake is a huge plus. Strong analytical and problem-solving skills. Relevant experience with ETL methods and retrieving data from dimensional data models and data warehouses. Strong experience with relational databases and data access methods, especially SQL. Excellent collaboration and cross-functional leadership skills. Excellent communication skills, both written and verbal. Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment. Ability to leverage data assets to respond to complex questions that require timely answers. Working knowledge of migrating relational and dimensional databases on the AWS Cloud platform.
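Retrieving data from a dimensional model, as this listing asks, typically means joining a fact table to its dimensions and aggregating. A self-contained `sqlite3` sketch of that query shape; the star schema and names are invented for illustration:

```python
import sqlite3

# Minimal star schema: one fact table joined to one dimension -- the
# shape of query used against dimensional models and data warehouses.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales VALUES (1, 10.0), (1, 5.0), (2, 2.5);
""")
rows = con.execute("""
    SELECT p.name, SUM(f.amount) AS total
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name ORDER BY total DESC
""").fetchall()
```

The same join-then-aggregate pattern carries over to Spark SQL or Snowflake; only the scale and the engine change.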

Posted 1 month ago

Apply

10.0 - 20.0 years

12 - 22 Lacs

Pune

Work from Office

Your key responsibilities: Ensures that the Service Operations team provides an optimum service level to the business lines it supports. Takes overall responsibility for the resolution of incidents and problems within the team. Oversees the resolution of complex incidents. Ensures that Analysts apply the right problem-solving techniques and processes. Assists in managing business stakeholder relationships. Assists in defining and managing OLAs with relevant stakeholders. Ensures that the team understands OLAs, and resources appropriately, aligned to business SLAs. Ensures relevant Client Service teams are informed of progress on incidents, where necessary. Ensures that defined divisional Production Management service operations and support processes are adhered to by the team. Makes improvement recommendations where appropriate. Prepares for and, if requested, manages team review meetings. Makes suggestions for continual service improvement. Manages escalations by working with Client Services, other Service Operations Specialists, and relevant functions to accurately resolve escalated issues quickly. Observes areas requiring monitoring, reporting, and improvement. Identifies required metrics and ensures they are established, monitored, and improved where appropriate. Continuously seeks to improve team performance. Participates in team training events, where appropriate. Works with team members to identify areas of focus where training may improve team performance and incident resolution. Mentors and coaches Production Management Analysts within the team by providing career development and counselling, as needed. Assists Production Management Analysts in setting performance targets, and manages performance against them. Identifies team bottlenecks (obstacles) and takes appropriate actions to eliminate them.
Provides Level 3 or advanced support for technical infrastructure components. Evaluates new products, including prototyping, and recommends new products, including automation. Specifies/selects tools to enhance operational support. Champions activities and establishes best practices in the specialist area, working to implement best-of-breed test practices and processes in the area of profession. Defines and implements best practices, solutions, and standards related to their area of expertise. Builds, captures, and manages the transfer of knowledge across the Service Operations organization. Fulfils service requests addressed to L2 Support. Communicates with the Service Desk function and other L2 and L3 units. Incident, Change, and Problem Management and Service Request Fulfillment: solving customer incidents in time; log file analysis and root cause analysis; participating in major incident calls for high-priority incidents; resolving inconsistencies of data replication; supporting Problem Management to solve application issues; creating/executing service requests for customers and providing reports and statistics; escalating and informing about incidents in a timely manner. Documentation of tasks, incidents, problems, and changes; documentation in ServiceNow; documentation in knowledge bases. Improving monitoring of the application: adding requests for monitoring; adding alerts and thresholds for occurring issues; implementing automation of tasks. Your skills and experience: Service Operations Specialist experience within a global operations context. Extensive experience of supporting complex application and infrastructure domains. Experience managing and mentoring Service Operations teams. Broad ITIL/best-practice service context within a real-time distributed environment. Experience managing relationships across multiple disciplines and time zones. Ability to converse clearly with internal and external staff via telephone and written communication. Good knowledge of interface technologies and communication protocols. Willingness to work in DE business hours. Clear and concise documentation in general, and especially proper documentation of the current status of incidents, problems, and service requests in the Service Management tool. Thorough and precise work style with a focus on high quality. Distinct service orientation. High degree of self-initiative. Bachelor's degree from an accredited college or university with a concentration in IT or a Computer Science-related discipline (equivalent diploma or technical faculty). ITIL certification and experience with the ITSM tool ServiceNow (preferred). Know-how in the banking domain and preferably regulatory topics around know-your-customer processes. Experience with databases like BigQuery and a good understanding of Big Data and GCP technologies. Experience in at least: GitHub, Terraform, Cloud SQL, Cloud Storage, Dataproc, Dataflow. Architectural skills for big data solutions, especially interface architecture. You can work very well in teams but also independently, and you are constructive and target-oriented. Your English skills are very good, and you can communicate professionally but also informally in small talk with the team. Area-specific tasks/responsibilities: Handling Incident/Problem Management and Service Request Fulfilment. Analyze incidents escalated from 1st Level Support. Analyze errors arising from batch processing and the interfaces of related systems. Determine and implement resolutions or workarounds. Support the resolution of high-impact incidents on our services, including attendance at incident bridge calls. Escalate incident tickets and work with members of the team and developers. Handle service requests, e.g. reports for business and projects. Provide resolution for open problems, or ensure that the appropriate parties have been tasked with doing so. Support the handover of new projects/applications into Production Services with Service Transition before the go-live phase. Support on-call support activities.

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 12 Lacs

Chennai

Work from Office

We are looking for a skilled Senior Specialist - Data Engineering with 5 to 10 years of experience to join our team at Apptad Technologies Pvt Ltd. The ideal candidate will have a strong background in data engineering and excellent technical skills. Roles and Responsibility: Design, develop, and implement large-scale data pipelines and architectures. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain complex data systems and databases. Ensure data quality, integrity, and security. Optimize data processing workflows for improved performance and efficiency. Stay updated with industry trends and emerging technologies. Job requirements: Strong understanding of data engineering principles and practices. Experience with big data technologies such as Hadoop, Spark, and NoSQL databases. Excellent programming skills in languages like Java, Python, or Scala. Strong problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Effective communication and interpersonal skills. For more information, please contact us at 6566536.

Posted 1 month ago

Apply

6.0 - 8.0 years

3 - 7 Lacs

Noida

Work from Office

Company: Apptad Technologies Pvt Ltd. Industry: Employment Firms/Recruitment Services Firms. Experience: 6 to 8 years. Job title: Data Engineer. Job location: Remote. Job type: Full time. Ref: 6566581. Apptad is looking for a Data Engineer profile. It's a full-time/long-term job opportunity with us. The candidate should have advanced Python, advanced SQL, and PySpark. Python programming language: Level: Advanced. Key concepts: multi-threading, multi-processing, regular expressions, exception handling, etc. Libraries: Pandas, NumPy, etc. Data modelling and data transformation: Level: Advanced. Key areas: data processing on structured and unstructured data. Relational databases: Level: Advanced. Key areas: query optimization, query building, experience with ORMs like SQLAlchemy, exposure to databases such as MSSQL, Postgres, Oracle, etc. Functional and object-oriented programming (OOP): Level: Intermediate. Problem solving for feature development: Level: Intermediate. Good experience working with AWS Cloud and its data-engineering-related services, such as Athena, AWS Batch jobs, etc.
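Two of this listing's "key concepts" (regular expressions and exception handling) commonly combine in record parsing, where malformed rows are quarantined instead of crashing the job. An illustrative sketch; the log format and function names are assumptions, not from the listing:

```python
import re

LOG_LINE = re.compile(r"^(?P<level>[A-Z]+) (?P<msg>.+)$")

def parse_log_line(line):
    """Parse a 'LEVEL message' line, raising ValueError on bad input."""
    match = LOG_LINE.match(line)
    if match is None:
        raise ValueError(f"unparseable line: {line!r}")
    return match.group("level"), match.group("msg")

def parse_all(lines):
    """Collect parsed rows; quarantine bad ones rather than crash the run."""
    parsed, bad = [], []
    for line in lines:
        try:
            parsed.append(parse_log_line(line))
        except ValueError:
            bad.append(line)
    return parsed, bad

parsed, bad = parse_all(["INFO job started", "???", "ERROR disk full"])
```

The quarantine list gives the pipeline something to report on (or replay) without losing the good rows in the same batch.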

Posted 1 month ago

Apply

4.0 - 6.0 years

3 - 7 Lacs

Noida

Work from Office

Company: Apptad Technologies Pvt Ltd. Industry: Employment Firms/Recruitment Services Firms. Experience: 4 to 6 years. Job title: Python Developer. Ref: 6566420. JD: Experience with AWS and Python: AWS CloudFormation, Step Functions, Glue, Lambda, S3, SNS, SQS, IAM, Athena, EventBridge, and API Gateway. Experience in Python development. Expertise in multiple applications and functionalities. Domain skills with a quick-learning inclination. Good SQL knowledge and understanding of databases. Familiarity with MS Office and SharePoint. High aptitude and excellent problem-solving skills. Strong analytical skills. Interpersonal skills and the ability to influence stakeholders.

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 14 Lacs

Hyderabad, Pune

Work from Office

The Data Scientist - Generative AI & NLP Specialist will be responsible for designing, developing, and deploying AI models and solutions that meet our business needs. With 4+ years of hands-on data science experience and at least 2+ years working in Generative AI, you will bring specialized expertise in LLMs and NLP. Project experience in NLP is a must, and experience in developing AI agents will be considered a strong plus. This role suits a creative, analytical, and proactive individual focused on pushing the capabilities of AI within our projects. Primary skills: Develop and implement AI models focused on NLP tasks such as text classification, entity recognition, sentiment analysis, and language generation. Leverage deep knowledge of Large Language Models (LLMs) to design, fine-tune, and deploy high-impact solutions across various business domains. Collaborate with cross-functional teams (data engineers, product managers, and domain experts) to define problem statements, build robust data pipelines, and integrate models into production systems. Stay current with advancements in Generative AI and NLP; research and evaluate new methodologies to drive innovation and maintain a competitive edge. Build, test, and optimize AI agents for automated tasks and enhanced user experiences where applicable.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Introduction: A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience. Your Role and Responsibilities: As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems. Preferred Education: Master's Degree. Required Technical and Professional Expertise: Core Java, Spring Boot, Java 2/EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python. Preferred Technical and Professional Experience: None.

Posted 1 month ago

Apply

9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Area(s) of responsibility: About Birlasoft: Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design-thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose. About the job: Develop and maintain automated test scripts using UFT (Unified Functional Testing) to ensure the quality of web, desktop, and enterprise applications. Job title: Kafka Sr Developer. Experience required: 9+ relevant years. Notice period: immediate joiners only. Education: Engineering or a related field. Location: Hyderabad only. JD: Python | MS SQL | Java | Azure Databricks | Spark | Kinesis | Kafka | Sqoop | Hive | Apache NiFi | Unix shell scripting. The person should be able to work with the business team, understand the requirements, work on design and development (hands-on), and support the testing, go-live, and hypercare phases. The person should also act as a mentor and guide to the offshore Medronic Kafka developer, reviewing their work and taking ownership of the deliverables.

Posted 1 month ago

Apply

6.0 years

0 Lacs

India

On-site

Company Description: DevoTrend IT is a global technology solutions provider leading the digitalization of private and public sectors. We deliver end-to-end digital transformation solutions and services, from ideation to deployment. Our offerings include IT & Software Consultancy Services, Resources Outsourcing Services, and Digital Transformation Consultancy, all aimed at driving innovative and productive experiences for our customers. With expertise in cloud, analytics, mobility, and various CRM/ERP platforms, we provide impactful and maintainable software solutions. Role Description: This is a full-time hybrid role for a Snowflake Data Engineer; the locations are Pune, Mumbai, Chennai, and Bangalore. The Snowflake Data Engineer will be responsible for designing, implementing, and managing data warehousing solutions on the Snowflake platform. Day-to-day tasks will include data modeling, building and managing ETL processes, and performing data analytics. The role requires close collaboration with cross-functional teams to ensure data integrity and optimal performance of the data infrastructure. Qualifications:
• Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies
• Monitor active ETL jobs in production
• Build out data lineage artifacts to ensure all current and future systems are properly documented
• Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes
• Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies
• Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations
• Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs
• Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence, and MDM solutions, including Data Lakes/Data Vaults
Supervisory responsibilities:
• This job has no supervisory responsibilities
Qualifications:
• Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years' experience in business analytics, data science, software development, data modeling, or data engineering work
• 3-5 years' experience with strong SQL query/development proficiency
• Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks
• Hands-on experience with ETL tools (e.g. Informatica, Talend, dbt, Azure Data Factory)
• Experience working in the healthcare industry with PHI/PII
• Creative, lateral, and critical thinker
• Excellent communicator
• Well-developed interpersonal skills
• Good at prioritizing tasks and time management
• Ability to describe, create, and implement new solutions
• Experience with related or complementary open-source software platforms and languages (e.g. Java, Linux, Apache, Perl/Python/PHP, Chef)
• Knowledge of / hands-on experience with BI tools and reporting software (e.g. Cognos, Power BI, Tableau)
• Big Data stack (e.g. Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume)

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dear Candidate! Greetings from TCS!!! Role: Red Hat Linux Administrator. Location: Bangalore/Chennai/Hyderabad/Mumbai/Indore. Experience range: 5 to 12 years. Job description: Experience with services like HDFS, Sqoop, NiFi, Hive, and HBase, and with Linux shell scripting. Experience with security-related components like Kerberos and Ranger. Obtain and analyze business requirements and document technical solutions. Leadership skills in technical initiatives. Generating detailed technical documentation. Communications that clearly articulate solutions and the ability to perform demonstrations. TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together.

Posted 1 month ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Collaborate with key stakeholders to understand the specific data engineering requirements and objectives of the organization. Take ownership of the data engineering process and work closely with the team to ensure the successful implementation and maintenance of data pipelines and structures for smart plant initiatives. Align the data fabric for our initial Pilot facilities, working in coordination with our business and vendor counterparts to effectively meet the data needs for all initiatives. Develop a comprehensive roadmap outlining the data points necessary to support various initiatives, such as digital twin, predictive capabilities, and reporting. Assist in the creation of a roll-out plan for additional facilities, including scaling documents, updating packages for our data fabric technologies, and refining data structures. Collaborate with other team members and external resources, to delegate and oversee tasks where necessary to speed up development. Stay up to date with the latest industry trends and best practices in data engineering, recommending and implementing improvements as appropriate. Implement AI

Posted 1 month ago

Apply

4.0 - 9.0 years

9 - 13 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Krazy Mantra Group of Companies is looking for a Big Data Engineer to join our dynamic team and embark on a rewarding career journey. Designing and implementing scalable data storage solutions, such as Hadoop and NoSQL databases. Developing and maintaining big data processing pipelines using tools such as Apache Spark and Apache Storm. Writing and testing data processing scripts using languages such as Python and Scala. Integrating big data solutions with other IT systems and data sources. Collaborating with data scientists and business stakeholders to understand data requirements and identify opportunities for data-driven decision making. Ensuring the security and privacy of sensitive data. Monitoring performance and optimizing big data systems to ensure they meet performance and availability requirements. Staying up-to-date with emerging technologies and trends in big data and data engineering. Mentoring junior team members and providing technical guidance as needed. Documenting and communicating technical designs, solutions, and best practices. Strong problem-solving and debugging skills. Excellent written and verbal communication skills.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a Spark / Big Data ETL Tech Lead for Commercial Card's Global Data Repository development team. The successful candidate will interact with the Development Project Manager; the development, testing, and production support teams; and other departments within Citigroup (such as the System Administrators, Database Administrators, Data Centre Operations, and Change Control groups) for TTS platforms. The role requires exceptional communication skills across both technology and the business and will have a high degree of visibility. The candidate will be a rigorous technical lead with a strong understanding of how to build scalable, enterprise-level global applications. The ideal candidate will be a dependable and resourceful software professional who can comfortably work in a large development team in a globally distributed, dynamic work environment that fosters diversity, teamwork, and collaboration. The ability to work in a high-pressure environment is essential. Responsibilities: Lead the design and implementation of large-scale data processing pipelines using Apache Spark on a Big Data Hadoop platform. Develop and optimize Spark applications for performance and scalability. Provide technical leadership for multiple large-scale/complex global software solutions. Integrate data from various sources, including Couchbase, Snowflake, and HBase, ensuring data quality and consistency. Experience developing teams of permanent employees and vendors from 5 to 15 developers in size. Build and sustain strong relationships with the senior business leaders associated with the platform. Design, code, test, document, and implement application release projects as part of the development team. Work with onsite development partners to ensure design and coding best practices. Work closely with Program Management and Quality Control teams to deliver quality software to agreed project schedules.
- Proactively notify the Development Project Manager of risks, bottlenecks, problems, issues, and concerns.
- Comply with Citi's System Development Lifecycle and Information Security requirements.
- Oversee development scope, budgets, and timeline documents.
- Monitor, update and communicate project timelines and milestones; obtain senior management feedback; understand potential speed bumps and the client’s true concerns/needs.
- Stay updated with the latest trends and technologies in big data and cloud computing.
- Mentor and guide junior developers, providing technical leadership and expertise.

Key Challenges:
- Managing time and changing priorities in a dynamic environment
- Providing quick turnaround on software issues and management requests
- Assimilating key issues and concepts and coming up to speed quickly

Qualifications:
- Bachelor’s or master’s degree in computer science, information technology, or equivalent
- Minimum 10 years of proven experience in developing and managing big data solutions using Apache Spark
- Strong hold on Spark Core, Spark SQL and Spark Streaming
- Minimum 6 years of experience successfully leading globally distributed teams
- Strong programming skills in Scala, Java, or Python
- Hands-on experience with technologies such as Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop and Flume
- Proficiency in SQL and experience with relational (Oracle/PL-SQL) and NoSQL databases such as MongoDB
- Demonstrated people and technical management skills
- Demonstrated excellent software development skills
- Strong experience implementing complex file transformations such as positional and XML
- Experience building enterprise systems with a focus on recovery, stability, reliability, scalability and performance
- Experience working on Kafka and JMS/MQ applications
- Experience working with multiple operating systems (Unix, Linux, Windows)
- Familiarity with data warehousing concepts and ETL processes
- Experience in performance tuning of large technical solutions with significant volumes
- Knowledge of data modeling, data architecture, and data integration techniques
- Knowledge of best practices for data security, privacy, and compliance

Key Competencies:
- Excellent organization skills, attention to detail, and ability to multi-task
- Demonstrated sense of responsibility and capability to deliver quickly
- Excellent communication skills; clearly articulating and documenting technical and functional specifications is a key requirement
- Proactive problem-solver
- Relationship builder and team player
- Negotiation, difficult-conversation management and prioritization skills
- Flexibility to handle multiple complex projects and changing priorities
- Excellent verbal, written and interpersonal communication skills
- Good analytical and business skills
- Promotes teamwork and builds strong relationships within and across global teams
- Promotes continuous process improvement, especially in code quality, testability and reliability

Desirable Skills:
- Experience in Java, Spring, and ETL tools like Talend or Ab Initio is a plus
- Experience migrating functionality from ETL tools to Spark
- Experience/knowledge of cloud technologies (AWS, GCP)
- Experience in the financial industry
- ETL certification, project management certification
- Experience with Commercial Cards applications and processes would be advantageous
- Experience with Agile methodology

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 month ago

Apply

3.0 - 8.0 years

11 - 16 Lacs

Pune

Work from Office

Job Title: Lead Engineer
Location: Pune
Corporate Title: Director

As a lead engineer within the Transaction Monitoring department, you will lead and drive forward critical engineering initiatives and improvements to our application landscape whilst supporting and leading the engineering teams to excel in their roles. You will be closely aligned to the architecture function and delivery leads, ensuring alignment with planning and that correct design and architecture governance is followed for all implementation work. You will lead by example and drive and contribute to automation and innovation initiatives with the engineering teams. Join the fight against financial crime with us!

Your key responsibilities
- Act as an experienced hands-on cloud and on-premise engineer, leading by example with engineering squads
- Think analytically, with a systematic and logical approach to solving complex problems, and high attention to detail
- Design and document complex technical solutions at varying levels in an inclusive and participatory manner with a range of stakeholders
- Liaise and face off directly to stakeholders in technology, business and modelling areas
- Collaborate with application development teams to design and prototype solutions (both on-premises and on-cloud), supporting/presenting these via the Design Authority forum for approval and providing good practice and guidelines to the teams
- Ensure engineering and architecture compliance with bank-standard processes for deploying new applications, working directly with central functions such as Group Architecture, Chief Security Office and Data Governance
- Innovate and think creatively, showing willingness to apply new approaches to solving problems and to learn new methods, technologies and potentially out-of-the-box solutions

Your skills and experience
- Proven hands-on engineering and design experience in a delivery-focused (preferably agile) environment
- Solid technical/engineering background, preferably with at least two high-level languages and multiple relational databases or big-data technologies
- Proven experience with cloud technologies, preferably GCP (GKE/DataProc/CloudSQL/BigQuery), GitHub and Terraform
- Competence/expertise in technical skills across a wide range of technology platforms and the ability to use and learn new frameworks, libraries and technologies
- A deep understanding of the software development life cycle and the waterfall and agile methodologies
- Experience leading complex engineering initiatives and engineering teams
- Excellent communication skills, with demonstrable ability to interface and converse at both junior and senior levels and with non-IT staff
- Line management experience, including working in a matrix management configuration

How we'll support you
- Training and development to help you excel in your career
- Flexible working to assist you in balancing your personal priorities
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

Posted 1 month ago

Apply

4.0 - 8.0 years

7 - 12 Lacs

Pune, Bengaluru

Work from Office

Job Title: StreamSets ETL Developer, Associate
Location: Pune, India

Role Description
Currently DWS sources technology infrastructure, corporate functions systems [Finance, Risk, HR, Legal, Compliance, AFC, Audit, Corporate Services etc.] and other key services from DB. Project Proteus aims to strategically transform DWS to an Asset Management standalone operating platform; an ambitious and ground-breaking project that delivers separated DWS infrastructure and Corporate Functions in the cloud with essential new capabilities, further enhancing DWS' highly competitive and agile Asset Management capability. This role offers a unique opportunity to be part of a high-performing team implementing a strategic future-state technology landscape for all DWS Corporate Functions globally.

We are seeking a highly skilled and motivated ETL developer (individual contributor) to join our integration team. The ETL developer will be responsible for developing, testing and maintaining robust and scalable ETL processes to support our data integration initiatives. This role requires a strong understanding of database, Unix and ETL concepts, excellent SQL skills and experience with ETL tools and databases.

Your key responsibilities
- Create good-quality software using standard coding practices, with hands-on code development
- Thoroughly test developed ETL solutions/pipelines
- Review code of other team members
- Take end-to-end accountability and ownership of work/projects, applying the right and robust engineering practices
- Convert business requirements into technical design
- Delivery, deployment, review, business interaction and maintaining environments

Additionally, the role will include other responsibilities, such as:
- Collaborating across teams
- Sharing information and transferring knowledge and expertise to team members
- Working closely with stakeholders and other teams such as Functional Analysis and Quality Assurance
- Working with BA and QA to troubleshoot and resolve reported bugs/issues on applications

Your skills and experience
- Bachelor's degree from an accredited college or university with a concentration in Science or an IT-related discipline (or equivalent)
- Hands-on experience with StreamSets, SQL Server and Unix
- Experience developing and optimizing ETL pipelines for data ingestion, manipulation and integration
- Strong proficiency in SQL, including complex queries, stored procedures and functions
- Solid understanding of relational database concepts
- Familiarity with data modeling concepts (conceptual, logical, physical)
- Familiarity with HDFS, Kafka, microservices, Splunk
- Familiarity with cloud-based platforms (e.g. GCP, AWS)
- Experience with scripting languages (e.g. Bash, Groovy)
- Experience delivering within an agile delivery framework
- Experience with distributed version control tools (Git, GitHub, Bitbucket)
- Experience with Jenkins or other pipeline-based modern CI/CD systems
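The SQL proficiency this role calls for — complex queries over staged relational data — can be illustrated with a minimal, self-contained sketch using Python's built-in sqlite3 module. The table and column names here are invented for the example and are not part of the job description; a real engagement would target SQL Server as listed above.

```python
import sqlite3

# Hypothetical staging table for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE stg_trades(id INTEGER, account TEXT, amount REAL);
INSERT INTO stg_trades VALUES
    (1, 'A', 100.0), (2, 'A', 250.0), (3, 'B', 75.0), (4, NULL, 50.0);
""")

# A typical ETL-style quality gate plus aggregation: drop rows with a
# NULL business key, then summarize per account.
rows = cur.execute("""
    SELECT account, COUNT(*) AS n, SUM(amount) AS total
    FROM stg_trades
    WHERE account IS NOT NULL
    GROUP BY account
    ORDER BY account
""").fetchall()
print(rows)  # [('A', 2, 350.0), ('B', 1, 75.0)]
```

The same filter-then-aggregate pattern carries over directly to the SQL Server work described in the posting; only the dialect details differ.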

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 14 Lacs

Pune

Work from Office

We are looking for a skilled Data Engineer with 5-10 years of experience to join our team in Pune. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills.

Roles and Responsibility
- Design, develop, and implement data pipelines and architectures.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data systems and databases.
- Ensure data quality, integrity, and security.
- Optimize data processing and analysis workflows.
- Participate in code reviews and contribute to improving overall code quality.

Job Requirements
- Strong proficiency in programming languages such as Python or Java.
- Experience with big data technologies like Hadoop or Spark.
- Knowledge of database management systems like MySQL or NoSQL.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.

Notice period: Immediate joiners preferred.
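The pipeline work described above can be sketched at its simplest in plain Python. This is an illustrative extract-transform-load skeleton only — the function names and sample rows are invented, and a production pipeline would use tools such as Spark or an orchestrator like Airflow rather than in-memory lists.

```python
def extract():
    # Stand-in for reading from a source system.
    return [{"id": 1, "value": " 42 "}, {"id": 2, "value": "x"}, {"id": 3, "value": "7"}]

def transform(rows):
    # Clean and validate: keep only rows whose value parses as an integer,
    # a toy version of the "data quality" checks the role mentions.
    out = []
    for row in rows:
        raw = row["value"].strip()
        if raw.isdigit():
            out.append({"id": row["id"], "value": int(raw)})
    return out

def load(rows, sink):
    # Stand-in for writing to a warehouse table; returns rows loaded.
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse)  # 2 [{'id': 1, 'value': 42}, {'id': 3, 'value': 7}]
```

Keeping extract, transform and load as separate stages is what makes each step independently testable — the property code reviews in such roles typically look for.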

Posted 1 month ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Chennai

Work from Office

We are looking for a skilled Hadoop Developer with 3 to 6 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have expertise in developing and implementing big data solutions using Hadoop technologies.

Roles and Responsibility
- Design, develop, and deploy scalable big data applications using Hadoop.
- Collaborate with cross-functional teams to identify business requirements and develop solutions.
- Develop and maintain large-scale data processing systems using Hadoop MapReduce.
- Troubleshoot and optimize performance issues in existing Hadoop applications.
- Participate in code reviews to ensure high-quality code standards.
- Stay updated with the latest trends and technologies in big data development.

Job Requirements
- Strong understanding of the Hadoop ecosystem, including HDFS, YARN, and Oozie.
- Experience with programming languages such as Java or Python.
- Knowledge of database management systems such as MySQL or NoSQL.
- Familiarity with agile development methodologies and version control systems like Git.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment and communicate effectively with stakeholders.
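The MapReduce model behind the Hadoop work above can be illustrated with a tiny pure-Python word count. This is a sketch of the programming model only, not Hadoop itself — real jobs run distributed on a cluster (e.g. via the Java API or Hadoop Streaming), with the shuffle phase handled by the framework.

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the input split.
    return [(word, 1) for word in line.lower().split()]

def reducer(pairs):
    # Reduce phase: sum the counts per key, which in real Hadoop happens
    # after the shuffle has grouped pairs by key across the cluster.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big wins", "data pipelines"]
result = reducer(chain.from_iterable(mapper(l) for l in lines))
print(result)  # {'big': 2, 'data': 2, 'wins': 1, 'pipelines': 1}
```

The same map/shuffle/reduce decomposition is what lets Hadoop scale this logic to terabytes: mappers run independently per split, and reducers only ever see one key's values at a time.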

Posted 1 month ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Work from Office

We are looking for a skilled Senior Data Engineer with 5-8 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills.

Roles and Responsibility
- Design, develop, and implement large-scale data pipelines and architectures.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain complex data systems and databases.
- Ensure data quality, integrity, and security.
- Optimize data processing workflows for improved performance and efficiency.
- Troubleshoot and resolve technical issues related to data engineering.

Job Requirements
- Strong knowledge of data engineering principles and practices.
- Experience with data modeling, database design, and data warehousing.
- Proficiency in programming languages such as Python, Java, or C++.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.

Posted 1 month ago

Apply