
343 ETL Development Jobs - Page 14

Set up a job alert
JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5 - 10 years

5 - 9 Lacs

Chennai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Ab Initio
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will collaborate with teams to ensure successful project delivery and application functionality.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead and mentor junior professionals
- Conduct regular team meetings to discuss progress and challenges
- Stay updated on industry trends and best practices

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Ab Initio
- Strong understanding of ETL processes
- Experience with data integration and data warehousing
- Knowledge of data quality and data governance principles
- Hands-on experience with Ab Initio GDE and EME tools

Additional Information:
- The candidate should have a minimum of 5 years of experience in Ab Initio
- This position is based at our Chennai office
- 15 years of full-time education is required

Qualification: 15 years of full-time education

Posted 2 months ago

Apply

2 - 7 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Hybrid

Salary: 10-30 LPA
Experience: 2-7 years
Location: Gurgaon/Pune/Bangalore/Chennai
Notice period: Immediate to 30 days

Key Responsibilities:
- 2+ years of strong hands-on experience with Ab Initio technology.
- Good knowledge of Ab Initio components such as Reformat, Join, Sort, Rollup, Normalize, Scan, Lookup, and MFS, of Ab Initio parallelism, and of products such as Metadata Hub, Conduct>IT, Express>IT, and Control Center; a clear understanding of concepts like metaprogramming, continuous flows, and PDL is good to have.
- Very good knowledge of data warehousing, SQL, and Unix shell scripting.
- Knowledge of the ETL side of cloud platforms such as AWS or Azure, and of the Hadoop platform, is an added advantage.
- Experience working with banking domain data is an added advantage.
- Excellent technical knowledge in the design, development, and validation of complex ETL features using Ab Initio.
- Excellent knowledge of integration with upstream and downstream processes and systems.
- Ensure compliance with technical standards and processes.
- Ability to engage and collaborate with stakeholders to deliver assigned tasks with defined quality goals.
- Can work independently with minimum supervision and help the development team with technical issues.
- Good communication and analytical skills.

Posted 2 months ago

Apply

12 - 16 years

35 - 40 Lacs

Bengaluru

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala (a minimal illustrative sketch follows below).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
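For illustration, the sketch below shows the kind of Glue/EMR-style PySpark ETL step this listing describes: read raw files from S3, apply basic typing and data-quality filters, and write partitioned Parquet for downstream analytics. It is a minimal sketch only; the bucket names, paths, and column names are hypothetical and are not taken from the posting.

```python
# Minimal PySpark ETL sketch (illustrative only): read raw CSV from S3,
# apply a simple validation/transform step, and write partitioned Parquet.
# Bucket names, paths, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Extract: raw CSV files landed in an S3 prefix
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: basic typing, a data-quality filter, and a derived partition column
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: partitioned Parquet that downstream Glue/Athena tables can query
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```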

Posted 2 months ago

Apply

12 - 16 years

35 - 40 Lacs

Chennai

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 40 Lacs

Mumbai

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 40 Lacs

Kolkata

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

5 - 7 years

0 - 0 Lacs

Thiruvananthapuram

Work from Office

Job Summary: We are seeking a highly skilled SAP BODS Data Engineer with strong expertise in ETL development and Enterprise Data Warehousing (EDW). The ideal candidate will have a deep understanding of SAP Business Objects Data Services (BODS) and will be responsible for designing, developing, and maintaining robust data integration solutions.

Key Responsibilities:
- Design, develop, and implement efficient ETL solutions using SAP BODS.
- Build and optimize SAP BODS jobs, including job design, data flows, scripting, and debugging.
- Develop and maintain scalable data extraction, transformation, and loading (ETL) processes from diverse data sources.
- Create and manage data integration workflows to ensure high performance and scalability.
- Collaborate closely with data architects, analysts, and business stakeholders to deliver accurate and timely data solutions.
- Ensure data quality and consistency across different systems and platforms.
- Troubleshoot and resolve data-related issues in a timely manner.
- Document all ETL processes and maintain technical documentation.

Required Skills & Qualifications:
- 3+ years of hands-on experience with ETL development using SAP BODS.
- Strong proficiency in SAP BODS job design, data flow creation, scripting, and debugging.
- Solid understanding of data integration, ETL concepts, and data warehousing principles.
- Proficiency in SQL for data querying, data manipulation, and performance tuning.
- Familiarity with data modeling concepts and major database systems (e.g., Oracle, SQL Server, SAP HANA).
- Excellent problem-solving skills and keen attention to detail.
- Strong communication and interpersonal skills to facilitate effective collaboration.
- Ability to work independently, prioritize tasks, and manage multiple tasks in a dynamic environment.

Required Skills: SAP, EDW, ETL

Posted 2 months ago

Apply

8 - 13 years

4 - 8 Lacs

Gurugram

Work from Office

Remote type: On-site | Location: Gurugram, HR | Time type: Full time | Posted 5 Days Ago | Job requisition ID: REQ401285

Senior ETL Developer

What this job involves:
Are you comfortable working independently without close supervision? We offer an exciting role where you can enhance your skills and play a crucial part in delivering consistent, high-quality administrative and support tasks for the EPM team. The Senior ETL Developer/SSIS Administrator will lead the design of logical data models for JLL's EPM Landscape system. This role is responsible for implementing physical database structures and constructs, as well as developing operational data stores and data marts. The role entails developing and fine-tuning SQL procedures to enhance system performance. The individual will support functional tasks of medium-to-high technological complexity and build SSIS packages and transformations to meet business needs. This position contributes to maximizing the value of SSIS within the organization and collaborates with cross-functional teams to align data integration solutions with business objectives.

Responsibilities. The Senior ETL Developer will be responsible for:
- Gathering requirements and processing information to design data transformations that will effectively meet end-user needs.
- Designing, developing, and testing ETL processes for large-scale data extraction, transformation, and loading from source systems to the Data Warehouse and Data Marts.
- Creating SSIS packages to clean, prepare, and load data into the data warehouse and transfer data to EPM, ensuring data integrity and consistency throughout the ETL process.
- Monitoring and optimizing ETL performance and data quality.
- Creating routines for importing data using CSV files.
- Mapping disparate data sources (relational DBs, text files, Excel files) onto the target schema.
- Scheduling the packages to extract data at specific time intervals.
- Planning, coordinating, and supporting ETL processes, including architecting table structure, building ETL processes, documentation, and long-term preparedness.
- Extracting complex data from multiple data sources into usable and meaningful reports and analyses by implementing PL/SQL queries.
- Ensuring that the data architecture is scalable and maintainable.
- Troubleshooting data integration and data quality issues and bugs, analyzing reasons for failure, implementing optimal solutions, and revising procedures and documentation as needed.
- Utilizing hands-on SQL features: stored procedures, indexes, partitioning, bulk loads, DB configuration, security/roles, maintenance.
- Developing queries and procedures, creating custom reports/views, and assisting in debugging.
- Designing SSIS packages and ensuring their stability, reliability, and performance.

Sounds like you? To apply, you need to have:
- 8+ years of experience in Microsoft SQL Server Management Studio administration and development
- Bachelor's degree or equivalent
- Competency in Microsoft Office and Smart View
- Experience with Microsoft SQL databases and SSIS/SSAS development
- Experience working with Microsoft SSIS to create and deploy packages for ETL processes
- Experience in writing and troubleshooting SQL statements and creating stored procedures, views, and SQL functions
- Experience with data analytics and development
- Strong SQL coding experience with performance optimization for data queries
- Experience creating and supporting SSAS cubes
- Knowledge of Microsoft PowerShell and batch scripting
- Power BI development experience is good to have
- Strong critical and analytical thinking and problem-solving skills
- Ability to multi-task and thrive in a fast-paced, rapidly changing, and complex environment
- Good written and verbal communication skills
- Ability to learn new skills quickly to make a measurable difference
- Strong team player with proven success in contributing to a team-oriented environment
- Excellent communication (written and oral) and interpersonal skills
- Excellent troubleshooting and problem resolution skills

What we can do for you:
At JLL, we make sure that you become the best version of yourself by helping you realize your full potential in an entrepreneurial and inclusive work environment. We will empower your ambitions through our dedicated Total Rewards Program, competitive pay, and benefits package. Apply today!

Location: On-site, Gurugram, HR
Scheduled Weekly Hours: 40

Jones Lang LaSalle (JLL) is an Equal Opportunity Employer and is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, including the online application and/or overall selection process, you may contact us at . This email is only to request an accommodation. Please direct any other general recruiting inquiries to our "I want to work for JLL" page.

Posted 2 months ago

Apply

5 - 10 years

4 - 7 Lacs

Lucknow

Work from Office

Work alongside our multidisciplinary team of developers and designers to create the next generation of enterprise software. Support the entire application lifecycle (concept, design, development, test, release, and support). Special interest in SQL/query and/or MDX/XMLA/OLAP data sources. Responsible for end-to-end product development of Java-based services. Work with other developers to implement best practices, introduce new tools, and improve processes. Stay up to date with new technology trends.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 15+ years of software development in a professional environment
- Proficiency in Java
- Experience integrating services with relational databases and/or OLAP data sources
- Knowledge and experience in relational databases, OLAP, and/or query planning
- Strong working knowledge of SQL and/or MDX/XMLA
- Knowledge and experience creating applications on cloud platforms (Kubernetes, Red Hat OCP)
- Exposure to agile development and continuous integration/continuous delivery (CI/CD) environments, with tools such as GitHub, JIRA, Jenkins, etc.
- Other tools: ssh clients, Docker
- Excellent interpersonal and communication skills with the ability to effectively articulate technical challenges and devise solutions
- Ability to work independently in a large matrix environment

Preferred technical and professional experience:
- The position requires a back-end developer with strong Java skills
- Experience integrating Business Intelligence tools with relational data sources
- Experience integrating Business Intelligence tools with OLAP technologies such as SAP BW and SAP BW/4HANA
- Experience defining relational or OLAP test assets (test suites, automated tests) to ensure high code coverage and tight integration with Business Intelligence tools
- Full lifecycle of SAP BW and BW/4HANA assets: cube upgrades, server and server support; administering, maintaining, and upgrading using current SAP tooling

Posted 2 months ago

Apply

12 - 15 years

15 - 17 Lacs

Bengaluru

Work from Office

About The Role

Overview: Technology for today and tomorrow. The Boeing India Engineering & Technology Center (BIETC) is a 5500+ engineering workforce that contributes to global aerospace growth. Our engineers deliver cutting-edge R&D, innovation, and high-quality engineering work in global markets, and leverage new-age technologies such as AI/ML, IIoT, Cloud, Model-Based Engineering, and Additive Manufacturing, shaping the future of aerospace.

People-driven culture: At Boeing, we believe creativity and innovation thrive when every employee is trusted, empowered, and has the flexibility to choose, grow, learn, and explore. We offer variable arrangements depending upon business and customer needs, and professional pursuits that offer greater flexibility in the way our people work. We also believe that collaboration, frequent team engagements, and face-to-face meetings bring together different perspectives and thoughts, enabling every voice to be heard and every perspective to be respected. No matter where or how our teammates work, we are committed to positively shaping people's careers and being thoughtful about employee wellbeing.

The Boeing India Software Engineering team is currently looking for a Lead Software Engineer to join the team in Bengaluru, KA. As an ETL Developer, you will be part of the Application Solutions team, which develops software applications and digital products that create direct value for its customers. We provide revamped work environments focused on delivering data-driven solutions at a rapidly increased pace over traditional development. Be a part of our passionate and motivated team who are excited to use the latest in software technologies for modern web and mobile application development. Through our products we deliver innovative solutions to our global customer base at an accelerated pace.

Position Responsibilities:
- Perform data mining and collection procedures.
- Ensure data quality and integrity; interpret and analyze data problems.
- Visualize data and create reports.
- Experiment with new models and techniques.
- Determine how data can be used to achieve customer/user goals.
- Design data modeling processes.
- Create algorithms and predictive models for analysis; enable the development of prediction engines, pattern detection analysis, optimization algorithms, etc.
- Develop guidance for analytics-based wireframes.
- Organize and conduct data assessments; discover insights from structured and unstructured data.
- Estimate user stories/features (story point estimation) and tasks in hours with the required level of accuracy and commit to them as part of Sprint Planning.
- Contribute to backlog grooming meetings by promptly asking relevant questions to ensure requirements achieve the right level of Definition of Ready.
- Raise any impediments/risks (technical/operational/personal) and approach the Scrum Master/Technical Architect/PO accordingly to arrive at a solution.
- Update the status and remaining effort for assigned tasks on a daily basis.
- Ensure change requests are treated correctly and tracked in the system, impact analysis is done, and risks/timelines are appropriately communicated.
- Hands-on experience in understanding aerospace domain-specific data.
- Coordinate with data scientists in data preparation and exploration, making data ready.
- Clear understanding of defining data products and monetizing them.
- Experience in building self-service capabilities for users.
- Build quality checks across the data lineage and be responsible for designing and implementing different data patterns.
- Influence different stakeholders for funding and build the vision of the product in terms of usage, productivity, and scalability of the solutions.
- Build impactful, outcome-based solutions/products.

Basic Qualifications (Required Skills/Experience):
- Bachelor's or Master's degree
- 12-15 years of experience as a data engineer
- Expertise in SQL and Python; knowledge of Java, Oracle, R, data modeling, and Power BI
- Experience in understanding and interacting with multiple data formats
- Ability to rapidly learn and understand software from source code
- Expertise in understanding, analyzing, and optimizing large, complicated SQL statements
- Strong knowledge and experience in SQL Server, database design, and ETL queries
- Develop software models to simulate real-world problems to help operational leaders understand which variables to focus on
- Proficiency in streamlining and optimizing databases for efficient and consistent data consumption
- Strong understanding of data warehouse concepts, data lakes, and data mesh
- Familiarity with ETL tools and data ingestion patterns
- Hands-on experience in building data pipelines using GCP
- Hands-on experience in writing complex SQL (NoSQL is a big plus)
- Hands-on experience with data pipeline orchestration tools such as Airflow/GCP Composer
- Hands-on experience with data modeling
- Experience in leading teams with diversity
- Experience in performance tuning of large data warehouses/data lakes
- Exposure to prompt engineering, LLMs, and vector databases
- Python, SQL, and PySpark; Spark ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks
- Azure (ADF, ADB, Logic Apps, Azure SQL Database, Azure Key Vault, ADLS, Synapse)

Preferred Qualifications (Desired Skills/Experience):
- Pub/Sub, Terraform
- Deep learning (TensorFlow), machine learning, NLP
- Time series; BI/visualization tools such as Power BI and Tableau; languages such as R/Python

Typical Education & Experience: Education/experience typically acquired through advanced education (e.g., Bachelor's) and typically 12 to 15 years of related work experience, or an equivalent combination of education and experience (e.g., Master's + 11 years of related work experience, etc.).

Relocation: This position offers relocation within India, based on candidate eligibility.
Export Control Requirements: This is not an Export Control position.
Education: Bachelor's Degree or equivalent required.
Visa Sponsorship: Employer will not sponsor applicants for employment visa status.
Shift: Not a Shift Worker (India)

Posted 2 months ago

Apply

7 - 12 years

15 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Warm greetings from SP Staffing Services Pvt Ltd!

Experience: 4-13 years
Work Location: Bangalore/Hyderabad/Chennai/Pune/Kolkata

- Participates in ETL design of new or changing mappings and workflows with the team and prepares technical specifications.
- Creates ETL mappings, mapplets, workflows, and worklets using Informatica PowerCenter 10.x and prepares corresponding documentation.
- Designs and builds integrations supporting standard data warehousing objects (type-2 dimensions, aggregations, star schema, etc.).
- Performs source system analysis as required.
- Works with DBAs and Data Architects to plan and implement an appropriate data partitioning strategy in the Enterprise Data Warehouse.
- Implements versioning of the ETL repository and supporting code as necessary.
- Develops stored procedures, database triggers, and SQL queries where needed.
- Implements best practices and tunes SQL code for optimization.
- Loads data from SF Power Exchange to a relational database using Informatica.
- Works with XMLs, XML parser, Java, and HTTP transformation within Informatica.
- Works with Informatica Data Quality (Analyst and Developer).
- Primary skill is Informatica PowerCenter.

Interested candidates, kindly share your updated resume to ramya.r@spstaffing.in or contact 8667784354 (WhatsApp: 9597467601) to proceed further.

Posted 2 months ago

Apply

5 - 8 years

8 - 18 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Dear Candidate,

Wonderful job opportunity for an ETL Developer. Location: Hyderabad.

Greetings from LTIMindtree! I saw your profile on Naukri and was really impressed by your experience with advanced SQL. As you know, LTIMindtree is growing rapidly, and we want you to be a part of this journey. We are currently looking for someone like you to join our team.

Posted 2 months ago

Apply

5 - 10 years

5 - 15 Lacs

Mumbai Suburban, Navi Mumbai, Mumbai (All Areas)

Work from Office

Solid SQL skills, experience writing automation scripts in Python or shell, and hands-on experience with an ETL tool such as SSIS or Informatica (a minimal illustrative sketch follows this listing).

Role & responsibilities:
• Develop, design, and maintain interactive and visually compelling reports, dashboards, and analytics solutions using tools such as Power BI and AWS QuickSight.
• Collaborate with business stakeholders, data engineers, and analysts to gather requirements, understand data sources, and ensure alignment of reporting solutions with business needs.
• Write simple to complex SQL queries to extract, transform, and manipulate data from various sources, ensuring accuracy, efficiency, and performance.
• Identify and troubleshoot data quality and integration issues, proposing effective solutions to enhance data accuracy and reliability within reporting solutions.
• Stay up to date with industry trends and emerging technologies related to reporting, analytics, and data visualization to continually improve and innovate reporting practices.
• Work closely with the development team to integrate reporting solutions into existing applications or systems.
• Perform data analysis to identify trends, patterns, and insights, providing valuable information to business stakeholders.
• Collaborate with the team to document data models, report specifications, and technical processes.
• Participate actively in team meetings, discussions, and knowledge-sharing sessions, contributing to the growth of the team's capabilities.

Preferred candidate profile:
- Strong proficiency in SQL with the ability to write complex queries and optimize query performance.
- Extensive experience with data visualization tools such as Tableau and Power BI; familiarity with AWS QuickSight is a plus.
- Solid understanding of data warehousing concepts, data modeling, and ETL processes.
- Exceptional problem-solving skills and the ability to find innovative solutions to technical challenges.
- English at B2 level or higher.
- Basic to intermediate Python and machine learning knowledge is a plus.
- Knowledge of AI is a plus.
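For context, a minimal Python sketch of the SQL-plus-scripting automation this listing asks for is shown below: run a SQL extract, reshape the result with pandas, and export a CSV that a Power BI or QuickSight dashboard could ingest. The database, table, and column names are hypothetical, and SQLite stands in for whatever warehouse the team actually uses.

```python
# Minimal automation sketch (illustrative only): run a SQL extract,
# reshape it in pandas, and export a CSV for a reporting tool to ingest.
# The database, table, and column names are hypothetical placeholders.
import sqlite3
import pandas as pd

QUERY = """
SELECT region,
       strftime('%Y-%m', order_ts) AS order_month,
       SUM(amount)                 AS total_sales
FROM   orders
GROUP  BY region, order_month
ORDER  BY order_month, region
"""

with sqlite3.connect("sales.db") as conn:  # stand-in for the real warehouse
    monthly = pd.read_sql(QUERY, conn)

# Pivot months into columns so the report reads left-to-right over time
report = monthly.pivot(index="region", columns="order_month", values="total_sales")
report.to_csv("monthly_sales_by_region.csv")
```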

Posted 2 months ago

Apply

6 - 11 years

30 - 35 Lacs

Indore, Hyderabad, Delhi / NCR

Work from Office

Support enhancements to the MDM platform. Track system performance. Troubleshoot issues. Resolve production issues.

Required Candidate Profile:
- 5+ years in Python and advanced SQL, including profiling and refactoring
- Experience with REST APIs
- Hands-on Azure Databricks and ADF
- Experience with Markit EDM or Semarchy

Posted 2 months ago

Apply

2 - 5 years

4 - 8 Lacs

Pune

Work from Office

About The Role: Process Manager - AWS Data Engineer
Mumbai/Pune | Full-time (FT) | Technology Services
Shift Timings: EMEA (1pm-9pm) | Management Level: PM | Travel Requirements: NA

The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role enables them to identify discrepancies and propose optimal solutions by using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.

Process Manager roles and responsibilities:
- Understand clients' requirements and provide effective and efficient solutions in AWS using Snowflake.
- Assemble large, complex sets of data that meet non-functional and functional business requirements.
- Use Snowflake/Redshift architecture and design to create data pipelines and consolidate data in the data lake and data warehouse.
- Demonstrated strength and experience in data modeling, ETL development, and data warehousing concepts.
- Understand data pipelines and modern ways of automating data pipelines using cloud-based tooling.
- Test and clearly document implementations, so others can easily understand the requirements, implementation, and test conditions.
- Perform data quality testing and assurance as part of designing, building, and implementing scalable data solutions in SQL.

Technical and Functional Skills:
- AWS Services: Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc.
- Programming Languages: Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java.
- Data Warehousing: Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift.
- ETL Tools: Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines.
- Database Management: Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts.
- Big Data Technologies: Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS.
- Version Control: Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform).
- Problem-solving Skills: Ability to analyze complex technical problems and propose effective solutions.
- Communication Skills: Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
- Education and Experience: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments.

About eClerx: eClerx is a global leader in productized services, bringing together people, technology, and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics, and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

About eClerx Technology: eClerx's Technology Group collaboratively delivers Analytics, RPA, AI, and Machine Learning digital technologies that enable our consultants to help businesses thrive in a connected world. Our consultants and specialists partner with our global clients and colleagues to build and implement digital solutions through a broad spectrum of activities. To know more about us, visit https://eclerx.com

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.

Posted 2 months ago

Apply

7 - 12 years

30 - 40 Lacs

Pune, Bengaluru, Delhi / NCR

Hybrid

Support enhancements to the MDM platform. Develop pipelines using Snowflake, Python, SQL, and Airflow (a minimal illustrative sketch follows below). Track system performance. Troubleshoot issues. Resolve production issues.

Required Candidate Profile:
- 5+ years of hands-on, expert-level Snowflake, Python, and orchestration tools like Airflow
- Good understanding of the investment domain
- Experience with dbt
- Cloud experience (AWS, Azure)
- DevOps
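For context, a minimal Airflow 2.x DAG sketch of the kind of orchestration this listing mentions (Python, SQL, Snowflake, Airflow) is shown below. It is illustrative only: the DAG ID, schedule, and helper functions are hypothetical placeholders, not the employer's actual pipeline.

```python
# Minimal Airflow 2.x DAG sketch (illustrative only): a daily pipeline that
# stages source data and then loads/transforms it in the warehouse.
# Connection details, table names, and the helper functions are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_stage(**context):
    # Placeholder: pull data from the source system and stage it (e.g. to S3).
    print("extracting source data for", context["ds"])


def load_and_transform(**context):
    # Placeholder: COPY staged files into Snowflake and run transform SQL.
    print("loading and transforming for", context["ds"])


with DAG(
    dag_id="mdm_snowflake_pipeline_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_stage", python_callable=extract_to_stage)
    transform = PythonOperator(task_id="load_and_transform", python_callable=load_and_transform)

    # Run the extract before the load/transform step each day
    extract >> transform
```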

Posted 2 months ago

Apply

1 - 3 years

1 - 5 Lacs

Pune

Work from Office

Job Title: Data Analyst (SQL & ETL Process)
Experience: 1 to 3 Years
Location: Pune
Industry: Investment Banking / Capital Markets
Employment Type: Full Time

Role & responsibilities:
- Collaborate with business and technical teams to gather and analyze requirements for models and data jobs.
- Develop detailed technical specifications and data mappings for data models and system interfaces.
- Design and develop efficient SQL queries; expertise in complex SQL, PL/SQL, or T-SQL is essential.
- Optimize and tune SQL for high performance and large datasets.
- Build and maintain dashboards and reports using Power BI and/or Qlik.
- Work with modern cloud databases like Snowflake and handle semi-structured data formats including JSON and XML.
- Support and implement API-based data exchange; familiarity with XML, XSD, and REST APIs is a plus.
- Develop conceptual, logical, and physical data models; strong understanding of ER modeling, dimensional modeling, and granularity.
- Collaborate in cloud-based environments, primarily using Azure and Snowflake.

Key Skills: SQL, PL/SQL, T-SQL; SQL performance tuning; Power BI, Qlik; data modeling (ER, dimensional); JSON, XML, XSD; API integration (preferred); Azure, Snowflake (cloud platforms)

Preferred Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience working in investment banking or capital markets domains is highly preferred.
- Excellent problem-solving skills and ability to work independently in a fast-paced environment.

Posted 2 months ago

Apply

15 - 20 years

3 - 7 Lacs

Bengaluru

Work from Office

Project Role: Data Management Practitioner
Project Role Description: Maintain the quality and compliance of an organization's data assets. Design and implement data strategies, ensuring data integrity and enforcing governance policies. Establish protocols to handle data, safeguard sensitive information, and optimize data usage within the organization. Design and advise on data quality rules and set up effective data compliance policies.
Must-have skills: Teradata Vantage
Good-to-have skills: Data Architecture Principles, Teradata BI, Amazon Web Services (AWS)
Minimum 15 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Management Practitioner, you will be responsible for maintaining the quality and compliance of an organization's data assets. You will design and implement data strategies, ensure data integrity, enforce governance policies, and optimize data usage within the organization.

Roles & Responsibilities:
- Expected to be an SME with deep knowledge and experience.
- Should have influencing and advisory skills.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Design and implement data quality rules (a minimal illustrative sketch follows this listing).
- Advise on data compliance policies.
- Develop protocols to handle and safeguard sensitive data.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Teradata Vantage.
- Good-to-Have Skills: Experience with Data Architecture Principles, Amazon Web Services (AWS), Teradata BI.
- Strong understanding of data management principles.
- Experience in implementing data governance policies.
- Knowledge of data integrity and compliance standards.

Additional Information:
- The candidate should have a minimum of 15 years of experience in Teradata Vantage.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Qualification: 15 years of full-time education
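For illustration, the sketch below shows one simple way data quality rules of the kind this role describes might be expressed in Python with pandas: completeness, uniqueness, and range checks over a batch of records. Column names, thresholds, and the sample data are hypothetical; in practice such rules would run against the organization's Teradata tables.

```python
# Minimal data-quality rule sketch (illustrative only): validate a batch of
# records against simple rules (completeness, uniqueness, value ranges).
# Column names, thresholds, and the sample data are hypothetical.
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return a dict of rule name -> pass/fail for a customer extract."""
    return {
        "customer_id_not_null": df["customer_id"].notna().all(),
        "customer_id_unique": df["customer_id"].is_unique,
        "email_present": df["email"].notna().mean() >= 0.99,  # allow 1% missing
        "age_in_valid_range": df["age"].between(0, 120).all(),
    }


if __name__ == "__main__":
    batch = pd.DataFrame(
        {
            "customer_id": [1, 2, 3],
            "email": ["a@example.com", None, "c@example.com"],
            "age": [34, 51, 29],
        }
    )
    for rule, passed in run_quality_checks(batch).items():
        print(f"{rule}: {'PASS' if passed else 'FAIL'}")
```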

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
