Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
2.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Applications Development Programmer Analyst is an intermediate-level position responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements.
Identify and analyze issues, make recommendations, and implement solutions.
Utilize knowledge of business processes, system processes, and industry standards to solve complex issues.
Analyze information and make evaluative judgements to recommend solutions and improvements.
Conduct testing and debugging, utilize script tools, and write basic code to design specifications.
Assess the applicability of similar experiences and evaluate options under circumstances not covered by procedures.
Develop working knowledge of Citi's information systems, procedures, standards, client-server application development, network operations, database administration, systems administration, data center operations, and PC-based applications.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
2 to 5 years of application development experience through the full lifecycle of Java and Big Data applications, with primary experience in Core Java/J2EE application development.
The candidate should be strong in data structures and algorithms.
He/she should have thorough knowledge of, and hands-on experience with, the following technologies: Hadoop, the MapReduce framework, Spark, YARN, Sqoop, Pig, Hue, Unix, Java, Impala, and Cassandra on Mesos. Cloudera certification (CCDH) is an added advantage.
He/she should have implemented, or been part of, complex project execution in the Big Data/Spark ecosystem, processing large volumes of data, with a thorough understanding of distributed processing and integrated applications.
Exposure to ETL and BI tools is a plus.
Experience working in an agile environment, following agile Scrum best practices.
Expertise in troubleshooting and problem solving.
Expertise in test-driven development.

Education:
Bachelor's degree/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
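The role above is Java-centric, but the Hadoop/Spark/YARN stack it lists is easiest to illustrate briefly in PySpark. A minimal sketch only, not Citi's actual codebase; the paths, table, and column names are hypothetical:

```python
# Minimal sketch of the kind of Spark batch job the posting describes:
# read raw records from HDFS, aggregate, write partitioned output for
# downstream Hive/Impala queries. All names and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-txn-rollup").getOrCreate()

# Raw transactions landed on HDFS by an upstream ingest (hypothetical path)
txns = spark.read.parquet("hdfs:///data/raw/transactions")

daily = (
    txns.withColumn("txn_date", F.to_date("txn_ts"))
        .groupBy("account_id", "txn_date")
        .agg(F.sum("amount").alias("total_amount"),
             F.count("*").alias("txn_count"))
)

daily.write.mode("overwrite").partitionBy("txn_date").parquet(
    "hdfs:///data/curated/daily_txn_rollup")
spark.stop()
```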
Posted 1 week ago
10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: Lead Data Engineer

Job Summary
The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, and work on project teams to analyse, design, develop and deploy business intelligence / data integration solutions to support a variety of customer needs. This position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings and initiatives through mentoring and coaching. Provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges. Works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports and business intelligence best practices. Responsible for repeatable, lean and maintainable enterprise BI design across organizations. Effectively partners with the client team. We expect leadership not only in the conventional sense but also within the team: candidates should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing and approachability.

Responsibilities
Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc.
Create functional & technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc.
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
May serve as project or DI lead, overseeing multiple consultants from various competencies.
Stay current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for Data Integration.
Ensure proper execution/creation of methodology, training, templates, resource plans and engagement review processes.
Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate.
Coordinate and consult with the project manager, client business staff, client technical staff and project developers on data architecture best practices and anything else data-related at the project or business unit levels.
Architect, design, develop and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations and best-practice standards. Toolsets include, but are not limited to: SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau and Qlik.
Work with the report team to identify, design and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.

Required Qualifications
10 years of industry implementation experience with data integration tools such as AWS services (Redshift, Athena, Lambda, Glue, S3), ETL, etc.
5-8 years of management experience required.
5-8 years of consulting experience preferred.
Minimum of 5 years of data architecture, data modelling or similar experience.
Bachelor's degree or equivalent experience; Master's degree preferred.
Strong data warehousing, OLTP systems, data integration and SDLC experience.
Strong experience in orchestration, with working experience in cloud-native / 3rd-party ETL data load orchestration, and in either Data Factory, HDInsight, Data Pipeline, Cloud Composer or similar.
Understanding of and experience with major Data Architecture philosophies (Dimensional, ODS, Data Vault, etc.).
Understanding of modern data warehouse capabilities and technologies such as real-time, cloud and Big Data.
Understanding of on-premises and cloud infrastructure architectures (e.g. Azure, AWS, GCP).
Strong experience with Agile processes (Scrum cadences, roles, deliverables), working experience in Azure DevOps, JIRA or similar, and experience in CI/CD using one or more code management platforms.
Strong Databricks experience, including creating notebooks in PySpark.
Experience using major data modelling tools (e.g. ERwin, ER/Studio, PowerDesigner, etc.).
Experience with major database platforms (e.g. SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift, etc.).
3-5 years of development experience in decision support / business intelligence environments utilizing tools such as SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, Looker, etc.

Preferred Skills & Experience
Knowledge of and working experience with Data Integration processes, such as Data Warehousing, EAI, etc.
Experience providing estimates for Data Integration projects, including testing, documentation, and implementation.
Ability to analyse business requirements as they relate to data movement and transformation processes, and to research, evaluate and recommend alternative solutions.
Ability to provide technical direction to other team members, including contractors and employees.
Ability to contribute to conceptual data modelling sessions to accurately define business processes independently of data structures, and then combine the two.
Proven experience leading team members, directly or indirectly, in completing high-quality major deliverables with superior results.
Demonstrated ability to serve as a trusted advisor who builds influence with client management beyond simply EDM.
Can create documentation and presentations that "stand on their own."
Can advise sales on the evaluation of Data Integration efforts for new or existing client work.
Can contribute to internal/external Data Integration proofs of concept.
Demonstrates the ability to create new and innovative solutions to previously unencountered problems.
Ability to work independently on projects as well as collaborate effectively across teams.
Must excel in a fast-paced, agile environment where critical thinking and strong problem-solving skills are required for success.
Strong team-building, interpersonal, analytical, problem identification and resolution skills.
Experience working with multi-level business communities.
Can effectively utilise SQL and/or available BI tools to validate and elaborate business rules.
Demonstrates an understanding of EDM architectures and applies this knowledge in collaborating with the team to design effective solutions to business problems/issues.
Effectively influences and, at times, oversees business and data analysis activities to ensure sufficient understanding and quality of data.
Demonstrates a complete understanding of and utilises DSC methodology documents to efficiently complete assigned roles and associated tasks.
Deals effectively with all team members and builds strong working relationships/rapport with them.
Understands and leverages a multi-layer semantic model to ensure scalability, durability, and supportability of the analytic solution.
Understands modern data warehouse concepts (real-time, cloud, Big Data) and how to enable such capabilities from a reporting and analytics standpoint.
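The posting names Airflow and Azure Data Factory among its orchestration toolset. As a rough illustration only (DAG id, task names, and callables are hypothetical, Airflow 2.x style), a minimal Airflow DAG wiring an extract-transform-load sequence might look like:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task bodies; real implementations would call the warehouse,
# ADF pipelines, Databricks jobs, etc.
def extract():   print("pull from source systems")
def transform(): print("apply business rules")
def load():      print("publish to the warehouse")

with DAG(
    dag_id="nightly_data_integration",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",          # Airflow 2.x parameter
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)
    extract_t >> transform_t >> load_t
```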
Posted 1 week ago
4.0 - 9.0 years
5 - 9 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.

Key Responsibilities:
Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
Drive continuous improvement by leveraging the latest Snowflake features and industry trends.

Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
4+ years of experience in data architecture, data engineering, or a related field.
Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
Must be strong in SQL.
Proven track record of contributing to data projects and working in complex environments.
Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
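For context on the ingestion-and-loading work this role describes, here is a minimal sketch using the snowflake-connector-python package. The account, stage, and table names are hypothetical, and production code would use key-pair auth or a secrets manager rather than an inline password:

```python
import snowflake.connector

# Hypothetical connection parameters
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="ETL_USER",
    password="<from-secrets-manager>",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()
try:
    # Bulk-load staged files, then sanity-check the row count
    cur.execute("""
        COPY INTO staging.orders
        FROM @raw_stage/orders/
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
    cur.execute("SELECT COUNT(*) FROM staging.orders")
    print("rows loaded:", cur.fetchone()[0])
finally:
    cur.close()
    conn.close()
```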
Posted 1 week ago
6.0 - 11.0 years
6 - 10 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.

Key Responsibilities:
Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
Drive continuous improvement by leveraging the latest Snowflake features and industry trends.

Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
6+ years of experience in data architecture, data engineering, or a related field.
Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
Must have hands-on exposure to Airflow.
Proven track record of contributing to data projects and working in complex environments.
Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
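This variant of the role adds Airflow exposure. One hedged sketch of how the two commonly meet is the Snowflake provider's SnowflakeOperator (deprecated in newer provider releases in favor of SQLExecuteQueryOperator); the connection id, DAG id, and SQL below are hypothetical:

```python
from datetime import datetime
from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="snowflake_nightly_refresh",  # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    refresh = SnowflakeOperator(
        task_id="refresh_daily_sales",
        snowflake_conn_id="snowflake_default",
        sql="CALL analytics.refresh_daily_sales()",  # hypothetical stored procedure
    )
```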
Posted 1 week ago
4.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About Us
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from diverse backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Target Tech Overview
Every time a guest enters a Target store or browses Target.com, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Our global in-house technology team of more than 5,000 engineers, data scientists, architects, coaches and product managers strives to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation and continuous learning.

Pyramid Overview
Our Product Engineering teams fuel Target's business with cutting-edge technology to deliver incredible experiences and value for guests and team members. Using a responsive architecture platform, we build and deploy industry-leading technology enabling Target to operate efficiently, securely, and reliably from the inside out. We work across Target, developing comprehensive product strategies, leveraging enterprise and guest feedback to set the standard for best in retail.

Position Overview
4+ years of experience in software design & development, with 3+ years of experience in building scalable backend applications using Java.
Demonstrates broad and deep expertise in Java/Kotlin and their frameworks.
Designs, develops, and approves end-to-end functionality of a product line, platform, or infrastructure.
Communicates and coordinates with the project team, partners, and stakeholders.
Demonstrates expertise in the analysis and optimization of systems capacity, performance, and operational health.
Maintains deep technical knowledge within areas of expertise.
Stays current with new and evolving technologies via formal training and self-directed education.
Experience integrating with third-party and open-source frameworks.

About You
4-year degree or equivalent experience.
Experience: 4-7 years.
Programming experience with Java (Spring Boot) and Kotlin (Micronaut).
Strong problem-solving skills with a good understanding of data structures and algorithms.
Must have exposure to non-relational databases like MongoDB.
Must have exposure to distributed systems and microservice architecture.
Good to have: exposure to data pipelines, MLOps, Spark, Python.
Demonstrates a solid understanding of the impact of their own work on the team and/or guests.
Writes and organizes code using multiple computer languages, including distributed programming, and understands different frameworks and paradigms.
Delivers high-performance, scalable, repeatable, and secure deliverables with broad impact (high throughput and low latency).
Influences and applies data standards, policies, and procedures.
Maintains technical knowledge within areas of expertise.
Stays current with new and evolving technologies via formal training and self-directed education.

Know More About Us Here:
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Follow us on social media: https://www.linkedin.com/company/target/
Target Tech: https://tech.target.com/
Posted 1 week ago
8.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
Job Information
Job Opening ID: ZR_1581_JOB
Date Opened: 25/11/2022
Industry: Technology
Work Experience: 8-12 years
Job Title: Senior Specialist - Data Engineer
City: Pune
Province: Maharashtra
Country: India
Postal Code: 411001
Number of Positions: 4
Location: Pune / Mumbai / Bangalore / Chennai

Roles & Responsibilities:
Total 8-10 years of working experience.
8-10 years of experience with big data tools like Spark, Kafka, Hadoop, etc.
Design and deliver consumer-centric, high-performance systems. You would be dealing with huge volumes of data sets arriving through batch and streaming platforms.
You will be responsible for building and delivering data pipelines that process, transform, integrate and enrich data to meet various demands from the business.
Mentor the team on infrastructure, networking, data migration, monitoring and troubleshooting aspects.
Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps, etc.
Design, build, test and deploy streaming pipelines for data processing in real time and at scale.
Experience with stream-processing systems: Storm, Spark Streaming, Flink, etc.
Experience with object-oriented / object-functional scripting languages: Python, Java, Scala, etc.
Develop software systems using test-driven development, employing CI/CD practices.
Partner with other engineers and team members to develop software that meets business needs.
Follow Agile methodology for software development and technical documentation.
Banking/finance domain knowledge is good to have.
Strong written and oral communication, presentation and interpersonal skills.
Exceptional analytical, conceptual, and problem-solving abilities.
Able to prioritize and execute tasks in a high-pressure environment.
Experience working in a team-oriented, collaborative environment.
8-10 years of hands-on coding experience.
Proficient in Java, with a good knowledge of its ecosystem.
Experience writing Spark code using Scala.
Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop, Pig, Hue, etc.
Solid understanding of object-oriented programming and HDFS concepts.
Familiar with various design and architectural patterns.
Experience with relational SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB and Cassandra.
Experience with data pipeline tools like Airflow, etc.
Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, BigQuery.
Expertise in designing/developing platform components like caching, messaging, event processing, automation, transformation and tooling frameworks.
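As an illustration of the streaming-pipeline work listed above, a minimal PySpark Structured Streaming job reading from Kafka and landing Parquet files might look like this (the broker, topic, and paths are hypothetical, and the spark-sql-kafka connector package must be on the classpath):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-events").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "events")                       # hypothetical topic
    .load()
)

# Kafka values arrive as bytes; cast to string before parsing/enriching
parsed = events.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("parquet")
    .option("path", "hdfs:///data/streams/events/")       # hypothetical sink
    .option("checkpointLocation", "hdfs:///chk/events/")  # required for recovery
    .outputMode("append")
    .start()
)
query.awaitTermination()
```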
Posted 1 week ago
12.0 - 15.0 years
13 - 17 Lacs
Mumbai
Work from Office
Job Information
Job Opening ID: ZR_1688_JOB
Date Opened: 24/12/2022
Industry: Technology
Work Experience: 12-15 years
Job Title: Big Data Architect
City: Mumbai
Province: Maharashtra
Country: India
Postal Code: 400008
Number of Positions: 4
Location: Mumbai, Pune, Chennai, Hyderabad, Coimbatore, Kolkata

12+ years of experience in the Big Data space across architecture, design, development, testing & deployment, with a full understanding of the SDLC.
1. Experience with Hadoop and its related technology stack.
2. Experience with the Hadoop ecosystem (HDP + CDP) / Big Data (especially Hive); hands-on experience with programming languages such as Java/Scala/Python, and hands-on experience/knowledge of Spark.
3. Responsible for, and focused on, uptime and the reliable running of all ingestion/ETL jobs.
4. Good SQL and experience working in a Unix/Linux environment is a must.
5. Create and maintain optimal data pipeline architecture.
6. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
7. Cloud experience is good to have.
8. Experience with Hadoop integration with data visualization tools like Power BI is good to have.
Posted 1 week ago
6.0 - 10.0 years
3 - 7 Lacs
Chennai
Work from Office
Job Information
Job Opening ID: ZR_2199_JOB
Date Opened: 15/04/2024
Industry: Technology
Work Experience: 6-10 years
Job Title: Sr Data Engineer
City: Chennai
Province: Tamil Nadu
Country: India
Postal Code: 600004
Number of Positions: 4

Strong experience in Python.
Good experience in Databricks.
Experience working on the AWS/Azure cloud platforms.
Experience working with REST APIs and services, and with messaging and event technologies.
Experience with ETL or data pipeline build tools.
Experience with streaming platforms such as Kafka.
Demonstrated experience working with large and complex data sets.
Ability to document data pipeline architecture and design.
Experience in Airflow is nice to have.
Experience building complex Delta Lake solutions.
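Since the role combines Databricks and Delta Lake, here is a hedged sketch of an incremental upsert with the Delta Lake Python API (the paths and the order_id key are hypothetical; Delta is built into Databricks runtimes):

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Initial load into a Delta table (hypothetical paths)
spark.read.json("/mnt/raw/orders/") \
     .write.format("delta").mode("overwrite").save("/mnt/delta/orders")

# Merge a later increment into the table: update matches, insert new rows
target = DeltaTable.forPath(spark, "/mnt/delta/orders")
updates = spark.read.json("/mnt/raw/orders_increment/")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```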
Posted 1 week ago
5.0 - 8.0 years
2 - 5 Lacs
Chennai
Work from Office
Job Information
Job Opening ID: ZR_2168_JOB
Date Opened: 10/04/2024
Industry: Technology
Work Experience: 5-8 years
Job Title: AWS Data Engineer
City: Chennai
Province: Tamil Nadu
Country: India
Postal Code: 600002
Number of Positions: 4

Mandatory Skills: AWS, Python, SQL, Spark, Airflow, Snowflake

Responsibilities:
Create and manage cloud resources in AWS.
Ingest data from different data sources that expose data using different technologies, such as RDBMS, REST/HTTP APIs, flat files, streams, and time-series data from various proprietary systems.
Implement data ingestion and processing with the help of Big Data technologies.
Process and transform data using technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it using the language supported by the base data platform.
Develop automated data quality checks to make sure the right data enters the platform, and verify the results of the calculations.
Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
Define process improvement opportunities to optimize data collection, insights and displays.
Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible.
Identify and interpret trends and patterns from complex data sets.
Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
Be a key participant in regular Scrum ceremonies with the agile teams.
Be proficient at developing queries, writing reports and presenting findings.
Mentor junior members and bring in industry best practices.
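One of the responsibilities above is automated data quality checks that stop bad data from entering the platform. A minimal PySpark sketch of that idea (the path and key column are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3a://lake/ingest/customers/")  # hypothetical path

failures = []
# Null check on the business key
if df.filter(F.col("customer_id").isNull()).limit(1).count() > 0:
    failures.append("customer_id contains nulls")
# Uniqueness check on the business key
dupes = df.groupBy("customer_id").count().filter(F.col("count") > 1)
if dupes.limit(1).count() > 0:
    failures.append("customer_id is not unique")

if failures:
    # Fail the pipeline run so downstream jobs never see bad data
    raise ValueError("Data quality checks failed: " + "; ".join(failures))
```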
Posted 1 week ago
5.0 - 8.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Information
Job Opening ID: ZR_1628_JOB
Date Opened: 09/12/2022
Industry: Technology
Work Experience: 5-8 years
Job Title: Data Engineer
City: Bangalore
Province: Karnataka
Country: India
Postal Code: 560001
Number of Positions: 4

Roles and Responsibilities:
4+ years of experience as a data developer using Python.
Knowledge of Spark and PySpark is preferable but not mandatory.
Azure cloud experience preferred, ideally on the Azure platform including Azure Data Lake, Databricks and Data Factory.
Working knowledge of different file formats such as JSON, Parquet, CSV, etc.
Familiarity with data encryption and data masking.
Database experience in SQL Server is preferable; experience with NoSQL databases like MongoDB is preferred.
Team player; reliable, self-motivated, and self-disciplined.
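For the file-format requirement above, a short PySpark sketch of reading JSON, CSV, and Parquet and normalizing a feed to Parquet (paths are hypothetical mount points):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("formats").getOrCreate()

# Hypothetical data-lake mount paths
json_df = spark.read.json("/mnt/raw/events/")
csv_df = (spark.read.option("header", True)
                    .option("inferSchema", True)
                    .csv("/mnt/raw/customers.csv"))
parquet_df = spark.read.parquet("/mnt/curated/orders/")

# Normalize the CSV feed to columnar Parquet for downstream jobs
csv_df.write.mode("overwrite").parquet("/mnt/curated/customers/")
```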
Posted 1 week ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
10+ years of software development experience building large-scale distributed data processing systems/applications, data engineering, or large-scale internet systems. At least 4 years of experience developing/leading Big Data solutions at enterprise scale, with at least one end-to-end implementation. Strong experience in the programming languages Java/J2EE/Scala. Good experience in Spark/Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala and NoSQL databases. Experience with batch processing and AutoSys job scheduling and monitoring. Performance analysis, troubleshooting and resolution (this includes familiarity with, and investigation of, Cloudera/Hadoop logs). Work with Cloudera on open issues that would result in cluster configuration changes, and implement them as needed. Strong experience with databases such as SQL, Hive, Elasticsearch, HBase, etc. Knowledge of Hadoop security, data management and governance. Primary Skills: Java/Scala, ETL, Spark, Hadoop, Hive, Impala, Sqoop, HBase, Confluent Kafka, Oracle, Linux, Git, Jenkins CI/CD.
Posted 1 week ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
We are looking for a Data Engineer with experience in data warehouse projects, strong expertise in Snowflake, and hands-on knowledge of Azure Data Factory (ADF) and dbt (Data Build Tool). Proficiency in Python scripting will be an added advantage.

Key Responsibilities:
Design, develop, and optimize data pipelines and ETL processes for data warehousing projects.
Work extensively with Snowflake, ensuring efficient data modeling and query optimization.
Develop and manage data workflows using Azure Data Factory (ADF) for seamless data integration.
Implement data transformations, testing, and documentation using dbt.
Collaborate with cross-functional teams to ensure data accuracy, consistency, and security.
Troubleshoot data-related issues.
(Optional) Utilize Python for scripting, automation, and data processing tasks.

Required Skills & Qualifications:
Experience in data warehousing with a strong understanding of best practices.
Hands-on experience with Snowflake (data modeling, query optimization).
Proficiency in Azure Data Factory (ADF) for data pipeline development.
Strong working knowledge of dbt (Data Build Tool) for data transformations.
(Optional) Experience in Python scripting for automation and data manipulation.
Good understanding of SQL and query optimization techniques.
Experience in cloud-based data solutions (Azure).
Strong problem-solving skills and the ability to work in a fast-paced environment.
Experience with CI/CD pipelines for data engineering.

Why Join Us:
Opportunity to work on cutting-edge data engineering projects.
Work with a highly skilled and collaborative team.
Exposure to modern cloud-based data solutions.
Posted 1 week ago
6.0 - 12.0 years
8 - 10 Lacs
Chennai
On-site
Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
The Analytics and Intelligence Engine (AIE) team transforms analytical and operational data into Consumer and Wealth Client insights and enables personalization opportunities that are provided to Associate- and Customer-facing operational applications. The Big Data technologies used here are Hadoop/PySpark/Scala, HQL as the ETL layer, Unix as the file-landing environment, and real-time (or near real-time) streaming applications.

Job Description*
We are actively seeking a talented and motivated Senior Hadoop Developer/Lead to join our dynamic and energetic team. As a key contributor to our agile scrum teams, you will collaborate closely with the Insights division. We are looking for a candidate who can showcase strong technical expertise in Hadoop and related technologies, and who excels at collaborating with both onshore and offshore team members. The role requires both hands-on coding and collaboration with stakeholders to drive strategic design decisions. While functioning as an individual contributor for one or more teams, the Senior Hadoop Data Engineer may also have the opportunity to lead and take responsibility for end-to-end solution design and delivery, based on the scale of implementation and required skillsets.

Responsibilities*
Develop high-performance and scalable solutions for Insights, using the Big Data platform to facilitate the collection, storage, and analysis of massive data sets from multiple channels.
Utilize your in-depth knowledge of the Hadoop stack and storage technologies, including HDFS, Spark, Scala, MapReduce, YARN, Hive, Sqoop, Impala, Hue, and Oozie, to design and optimize data processing workflows.
Implement near real-time and streaming data solutions to provide up-to-date information to millions of Bank customers, using Spark Streaming and Kafka.
Collaborate with cross-functional teams to identify system bottlenecks, benchmark performance, and propose innovative solutions to enhance system efficiency.
Take ownership of defining Big Data strategies and roadmaps for the Enterprise, aligning them with business objectives.
Apply your expertise in NoSQL technologies like MongoDB, SingleStore, or HBase to efficiently handle diverse data types and storage requirements.
Stay abreast of emerging technologies and industry trends related to Big Data, continuously evaluating new tools and frameworks for potential integration.
Provide guidance and mentorship to junior teammates.

Requirements*

Education*
Graduation / Post Graduation: BE/B.Tech/MCA
Certifications, if any: NA

Experience Range*
6 to 12 years

Foundational Skills*
Minimum of 7 years of industry experience, with at least 5 years focused on hands-on work in the Big Data domain.
Highly skilled in Hadoop stack technologies, such as HDFS, Spark, Hive, YARN, Sqoop, Impala and Hue.
Strong proficiency in programming languages such as Python, Scala, and Bash/shell scripting.
Excellent problem-solving abilities and the capability to deliver effective solutions for business-critical applications.
Strong command of visual analytics tools, with a focus on Tableau.

Desired Skills*
Experience in real-time streaming technologies like Spark Streaming, Kafka, Flink, or Storm.
Proficiency in NoSQL technologies like HBase, MongoDB, SingleStore, etc.
Familiarity with cloud technologies such as Azure, AWS, or GCP.
Working knowledge of machine learning algorithms, statistical analysis, and programming languages (Python or R) to conduct data analysis and develop predictive models that uncover valuable patterns and trends.
Proficiency in data integration and data security within the Hadoop ecosystem, including knowledge of Kerberos.

Work Timings*
12:00 PM to 9:00 PM IST

Job Location*
Chennai, Mumbai
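The process overview names HQL as the ETL layer. A hedged sketch of that pattern, running HiveQL from a Hive-enabled Spark session (the database and table names are hypothetical):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("insights-hql-etl")
    .enableHiveSupport()   # lets Spark read/write Hive metastore tables
    .getOrCreate()
)

# Hypothetical Hive tables: aggregate raw events into a daily summary
spark.sql("""
    INSERT OVERWRITE TABLE insights.client_daily_summary
    SELECT client_id,
           to_date(event_ts) AS event_date,
           COUNT(*)          AS event_count
    FROM raw.client_events
    GROUP BY client_id, to_date(event_ts)
""")
spark.stop()
```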
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, and work on project teams to analyze, design, develop and deploy business intelligence / data integration solutions to support a variety of customer needs. This position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings and initiatives through mentoring and coaching. Provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges. Works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports and business intelligence best practices. Responsible for repeatable, lean and maintainable enterprise BI design across organizations. Effectively partners with the client team. We expect leadership not only in the conventional sense but also within the team: candidates should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing and approachability.

Responsibilities:
Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc.
Create functional & technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc.
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
May serve as project or DI lead, overseeing multiple consultants from various competencies.
Stay current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for Data Integration.
Ensure proper execution/creation of methodology, training, templates, resource plans and engagement review processes.
Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate.
Coordinate and consult with the project manager, client business staff, client technical staff and project developers on data architecture best practices and anything else data-related at the project or business unit levels.
Architect, design, develop and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations and best-practice standards. Toolsets include, but are not limited to: SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau and Qlik.
Work with the report team to identify, design and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.

Must have:
Writing code in a programming language, with working experience in Python, PySpark, Databricks, Scala or similar.
Data pipeline development & management: design, develop, and maintain ETL (Extract, Transform, Load) pipelines using AWS services like AWS Glue, AWS Data Pipeline, Lambda, and Step Functions.
Implement incremental data processing using tools like Apache Spark (EMR), Kinesis, and Kafka.
Work with AWS data storage solutions such as Amazon S3, Redshift, RDS, DynamoDB, and Aurora.
Optimize data partitioning, compression, and indexing for efficient querying and cost optimization.
Implement data lake architecture using AWS Lake Formation & Glue Catalog.
Implement CI/CD pipelines for data workflows using CodePipeline, CodeBuild, and GitHub.

Good to have:
Enterprise data modelling and semantic modelling, with working experience in ERwin, ER/Studio, PowerDesigner or similar.
Logical/physical modelling on Big Data sets or a modern data warehouse, with working experience in ERwin, ER/Studio, PowerDesigner or similar.
Agile process (Scrum cadences, roles, deliverables) and a basic understanding of either Azure DevOps, JIRA or similar. (ref:hirist.tech)
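To make the AWS Glue pipeline responsibilities concrete, here is a hedged boto3 sketch that triggers a hypothetical Glue job for one day's partition and polls its state (the job name, arguments, and region are assumptions):

```python
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Kick off a hypothetical Glue job for one day's partition of the data lake
run = glue.start_job_run(
    JobName="orders-incremental-load",
    Arguments={
        "--process_date": "2024-01-15",
        "--source_path": "s3://lake/raw/orders/",
    },
)

# Check the run state (RUNNING, SUCCEEDED, FAILED, ...)
status = glue.get_job_run(
    JobName="orders-incremental-load",
    RunId=run["JobRunId"],
)
print(status["JobRun"]["JobRunState"])
```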
Posted 1 week ago
3.0 - 8.0 years
5 - 10 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.

Roles & Responsibilities:
Expected to perform independently and become an SME.
Active participation/contribution in team discussions is required.
Contribute to providing solutions to work-related problems.
Lead the application development team in designing and building applications.
Act as the primary point of contact for project stakeholders.
Provide technical guidance and mentorship to team members.
Ensure timely delivery of high-quality software solutions.
Collaborate with cross-functional teams to drive project success.

Professional & Technical Skills:
Must-have skills: proficiency in PySpark.
Strong understanding of big data processing and analysis.
Experience with distributed computing frameworks.
Hands-on experience in building scalable data pipelines.
Knowledge of cloud platforms for data processing.

Additional Information:
The candidate should have a minimum of 3 years of experience in PySpark.
This position is based at our Hyderabad office.
15 years of full-time education is required.
Posted 1 week ago
5.0 - 7.0 years
13 - 17 Lacs
Bengaluru
Work from Office
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.

In your role, you will be responsible for:
Multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Log Explorer, etc.
Must have Python and SQL work experience; be proactive and collaborative, with the ability to respond to critical situations.
Ability to analyse data for functional business requirements and interface directly with the customer.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform.
Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Log Explorer.
An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work.
You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
End-to-end functional knowledge of the data pipeline/transformation implementations the candidate has delivered; should understand the purpose/KPIs for which each data transformation was done.

Preferred technical and professional experience:
Experience with AEM core technologies: OSGi services, Apache Sling, the Granite framework, the Java Content Repository API, Java 8+, localization.
Familiarity with build tools such as Jenkins and Maven; knowledge of version control tools, especially Git; knowledge of patterns and good practices for designing and developing quality, clean code; knowledge of HTML, CSS, JavaScript and jQuery; familiarity with task management, bug tracking, and collaboration tools like JIRA and Confluence.
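For the BigQuery-on-GCP analyst work described, a minimal google-cloud-bigquery sketch (the project, dataset, and table are hypothetical; credentials come from application-default auth):

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical project.dataset.table
sql = """
    SELECT order_date, SUM(amount) AS revenue
    FROM `my-project.sales.orders`
    GROUP BY order_date
    ORDER BY order_date
"""
for row in client.query(sql).result():
    print(row.order_date, row.revenue)
```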
Posted 1 week ago
2.0 - 6.0 years
12 - 16 Lacs
Bengaluru
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
AWS Data Vault 2.0 development for agile data ingestion, storage and scaling.
Databricks for complex queries covering transformation, aggregation and business logic implementation.
AWS Redshift and Redshift Spectrum for complex queries covering transformation, aggregation and business logic implementation.
DWH concepts: star schema and materialized views.
Strong SQL and data manipulation/transformation skills.

Preferred technical and professional experience:
Robust and scalable cloud infrastructure.
End-to-end data engineering pipelines.
Versatile programming capabilities.
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Responsibilities: Scala, Java, Spark (Spark Streaming, MLlib), Kafka or equivalent cloud big data components, SQL, PostgreSQL, T-SQL/PL-SQL, Hadoop (Airflow, Oozie, HDFS, Sqoop, Hive, Pig, MapReduce), shell scripting, cloud technologies (GCP preferable)
Mandatory Skill Sets: Scala, Spark, GCP
Preferred Skill Sets: Scala, Spark, GCP
Years of Experience Required: 4 - 8
Education Qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified). Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Master of Engineering. Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development {+ 7 more}
Desired Languages (If blank, desired languages not specified):
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
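As an illustration of the Spark Streaming plus Kafka stack this role names, below is a minimal PySpark Structured Streaming sketch; the broker address, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Illustrative only: consume a Kafka topic and land it as Parquet.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Subscribe to a hypothetical Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string for parsing.
parsed = events.selectExpr("CAST(value AS STRING) AS raw_event")

# Write the stream out with checkpointing so the job can recover on restart.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/landing/clickstream/")
    .option("checkpointLocation", "/data/checkpoints/clickstream/")
    .start()
)
query.awaitTermination()
```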
Posted 1 week ago
7.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
At Charles River, we are passionate about improving the quality of people’s lives. When you join our global family, you will help create healthier lives for millions of patients and their families. Charles River employees are innovative thinkers who are dedicated to continuous learning and improvement. We will empower you with the resources you need to grow and develop in your career. As a Charles River employee, you will be part of an industry-leading, customer-focused company at the forefront of drug development. Your skills will play a key role in bringing life-saving therapies to market faster through simpler, quicker, and more digitalized processes. Whether you are in lab operations, finance, IT, sales, or another area, when you work at Charles River, you will be the difference every day for patients across the globe.

Job Summary
There has never been a more exciting time to be part of the Enterprise Data Analytics team at Charles River Labs. We are on a mission to position data as the core driver of our business, empowering leaders to make informed, data-driven decisions that accelerate revenue, enhance productivity, and keep us ahead of the competition. Our recently launched Enterprise Data Hub serves as the company's digital backbone, and we are looking for visionary people in data analytics to help us further expand and refine this hub. Your role will be key in integrating, mastering, and ensuring the quality of our data across all business functions, ultimately transforming how Charles River operates through data science and advanced analytics.

You will be joining a team that is deeply committed to our purpose: Together We Create Healthier Lives. This unwavering focus on patients makes our global technology team uniquely inspiring. As we look to the future, we reimagine how we do business through our Digital Journey. This journey is central to advancing our position in the market, unlocking new growth opportunities, and positioning us as a leading, digitally powered Contract Research Organization (CRO) that enables our clients to deliver innovative, safe, and effective treatments to patients faster and more efficiently than ever before.

Note: This is a fully remote, home-based role for professionally qualified and experienced candidates based in India who are willing and open to working UK shifts.

Essential Qualifications
- Bachelor's degree in Computer Engineering, Computer Science, or a related discipline (Master's degree preferred)
- 7+ years of experience in ETL design, development, and performance tuning using the Microsoft BI stack in a multi-dimensional data warehousing environment
- 7+ years of advanced SQL programming expertise (PL/SQL, T-SQL)
- 5+ years of experience in Enterprise Data & Analytics solution architecture
- 3+ years of experience in Python programming
- 3+ years of hands-on experience with Azure, especially for data-heavy/analytics applications leveraging relational and NoSQL databases, data warehousing, and big data solutions
- 3+ years of experience with key Azure services: Azure Data Factory, Data Lake Gen2, Analysis Services, Databricks, Blob Storage, SQL Database, Cosmos DB, App Service, Logic Apps, and Functions
- 2+ years of experience designing data models aligned with business requirements and analytics needs
Preferred Skills
- 2+ years of experience with Big Data technologies such as Hadoop, Sqoop, Hive, Kafka, Spark, PySpark, Python, Scala, or Pig
- 2+ years of experience managing both relational and non-relational data using Big Data Management (BDM) techniques (formats like JSON, XML, Avro, Parquet, etc.)
- 2+ years of experience setting up and operating data pipelines using Python or SQL (a sketch follows this posting)
- Familiarity with DevOps processes (CI/CD) and infrastructure as code
- Knowledge of Master Data Management (MDM) and Data Quality tools
- Experience developing REST APIs using Java Spring Boot
- Familiarity with stream-processing systems (e.g., Event Hubs, Storm, Spark Streaming)
- Experience with API integrations (RESTful, SOAP) for both internal and external systems to enhance data flow and automation
- Experience in data and analytics within the Life Sciences industry is a plus

About Corporate Functions
The Corporate Functions provide operational support across Charles River in areas such as Human Resources, Finance, IT, Legal, Sales, Quality Assurance, Marketing, and Corporate Development. They partner with their colleagues across the company to develop and drive strategies and to set global standards. The functions are essential to providing a bridge between strategic vision and operational readiness, ensuring ongoing functional innovation and capability improvement.

About Charles River
Charles River is an early-stage contract research organization (CRO). We have built upon our foundation of laboratory animal medicine and science to develop a diverse portfolio of discovery and safety assessment services, both Good Laboratory Practice (GLP) and non-GLP, to support clients from target identification through preclinical development. Charles River also provides a suite of products and services to support our clients' clinical laboratory testing needs and manufacturing activities. Utilizing this broad portfolio of products and services enables our clients to create a more flexible drug development model, which reduces their costs and enhances their productivity and effectiveness, increasing speed to market.

With over 20,000 employees within 110 facilities in 20 countries around the globe, we are strategically positioned to coordinate worldwide resources and apply multidisciplinary perspectives in resolving our clients' unique challenges. Our client base includes global pharmaceutical companies, biotechnology companies, government agencies, and hospitals and academic institutions around the world.

At Charles River, we are passionate about our role in improving the quality of people's lives. Our mission, our excellent science, and our strong sense of purpose guide us in all that we do, and we approach each day with the knowledge that our work helps to improve the health and well-being of many across the globe. We have proudly worked on 80% of the drugs approved by the U.S. Food and Drug Administration (FDA) in the past five years.

Equal Employment Opportunity
Charles River Laboratories is an Equal Opportunity Employer - M/F/Disabled/Vet. If you are interested in applying to Charles River Laboratories and need special assistance or an accommodation due to a disability to complete any forms or to otherwise participate in the resume submission process, please contact a member of our Human Resources team by sending an e-mail message to crrecruitment_US@crl.com. This contact is for accommodation requests for individuals with disabilities only and cannot be used to inquire about the status of applications.
For more information, please visit www.criver.com. 226600
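To make the "data pipelines using Python or SQL" expectation above concrete, here is a minimal, illustrative Python extract-transform-load sketch against a Microsoft SQL Server source, matching the Microsoft BI stack this posting names; the connection string, query, FX rate, and output path are all hypothetical.

```python
# Illustrative only: a tiny incremental ETL step. Requires pyodbc, pandas,
# and pyarrow (for Parquet output).
import pandas as pd
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server;DATABASE=example_db;Trusted_Connection=yes;"
)

# Extract: pull only the last day's rows to keep the load incremental.
df = pd.read_sql(
    "SELECT order_id, customer_id, amount, order_date "
    "FROM dbo.orders WHERE order_date >= DATEADD(day, -1, GETDATE())",
    conn,
)

# Transform: a simple data-quality rule and a derived column.
df = df.dropna(subset=["customer_id"])
df["amount_inr"] = df["amount"] * 83.0  # hypothetical fixed FX rate

# Load: write a dated Parquet file for downstream consumers.
df.to_parquet(f"/data/curated/orders_{pd.Timestamp.today():%Y%m%d}.parquet")
conn.close()
```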
Posted 1 week ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
At Charles River, we are passionate about improving the quality of people’s lives. When you join our global family, you will help create healthier lives for millions of patients and their families. Charles River employees are innovative thinkers who are dedicated to continuous learning and improvement. We will empower you with the resources you need to grow and develop in your career. As a Charles River employee, you will be part of an industry-leading, customer-focused company at the forefront of drug development. Your skills will play a key role in bringing life-saving therapies to market faster through simpler, quicker, and more digitalized processes. Whether you are in lab operations, finance, IT, sales, or another area, when you work at Charles River, you will be the difference every day for patients across the globe.

Job Summary
There has never been a more exciting time to be part of the Enterprise Data Analytics team at Charles River Labs. We are on a mission to position data as the core driver of our business, empowering leaders to make informed, data-driven decisions that accelerate revenue, enhance productivity, and keep us ahead of the competition. Our recently launched Enterprise Data Hub serves as the company's digital backbone, and we are looking for visionary people in data analytics to help us further expand and refine this hub. Your role will be key in integrating, mastering, and ensuring the quality of our data across all business functions, ultimately transforming how Charles River operates through data science and advanced analytics.

You will be joining a team that is deeply committed to our purpose: Together We Create Healthier Lives. This unwavering focus on patients makes our global technology team uniquely inspiring. As we look to the future, we reimagine how we do business through our Digital Journey.

Note: This is a fully remote, home-based role for professionally qualified and experienced candidates based in India who are willing and open to working UK shifts.

Essential Qualifications
- Bachelor's degree in Computer Engineering, Computer Science, or a related discipline (Master's degree preferred)
- 3+ years of experience in ETL design, development, and performance tuning using the Microsoft BI stack in a multi-dimensional data warehousing environment
- 3+ years of advanced SQL programming expertise (PL/SQL, T-SQL)
- 1+ years of experience in Enterprise Data & Analytics solution architecture
- 1+ years of experience in Python programming
- 1+ years of hands-on experience with Azure, especially for data-heavy/analytics applications leveraging relational and NoSQL databases, data warehousing, and big data solutions
- 1+ years of experience with key Azure services: Azure Data Factory, Data Lake Gen2, Analysis Services, Databricks, Blob Storage, SQL Database, Cosmos DB, App Service, Logic Apps, and Functions

Preferred Skills
- Experience with Big Data technologies such as Hadoop, Sqoop, Hive, Kafka, Spark, PySpark, Python, Scala, or Pig
- Experience managing both relational and non-relational data using Big Data Management (BDM) techniques (formats like JSON, XML, Avro, Parquet, etc.)
- Experience setting up and operating data pipelines using Python or SQL
- Familiarity with DevOps processes (CI/CD) and infrastructure as code
- Knowledge of Master Data Management (MDM) and Data Quality tools
- Experience developing REST APIs using Java Spring Boot
- Familiarity with stream-processing systems (e.g., Event Hubs, Storm, Spark Streaming)
- Experience in data and analytics within the Life Sciences industry is a plus

About Corporate Functions
The Corporate Functions provide operational support across Charles River in areas such as Human Resources, Finance, IT, Legal, Sales, Quality Assurance, Marketing, and Corporate Development. They partner with their colleagues across the company to develop and drive strategies and to set global standards. The functions are essential to providing a bridge between strategic vision and operational readiness, ensuring ongoing functional innovation and capability improvement.

About Charles River
Charles River is an early-stage contract research organization (CRO). We have built upon our foundation of laboratory animal medicine and science to develop a diverse portfolio of discovery and safety assessment services, both Good Laboratory Practice (GLP) and non-GLP, to support clients from target identification through preclinical development. Charles River also provides a suite of products and services to support our clients' clinical laboratory testing needs and manufacturing activities. Utilizing this broad portfolio of products and services enables our clients to create a more flexible drug development model, which reduces their costs and enhances their productivity and effectiveness, increasing speed to market.

With over 20,000 employees within 110 facilities in 20 countries around the globe, we are strategically positioned to coordinate worldwide resources and apply multidisciplinary perspectives in resolving our clients' unique challenges. Our client base includes global pharmaceutical companies, biotechnology companies, government agencies, and hospitals and academic institutions around the world.

At Charles River, we are passionate about our role in improving the quality of people's lives. Our mission, our excellent science, and our strong sense of purpose guide us in all that we do, and we approach each day with the knowledge that our work helps to improve the health and well-being of many across the globe. We have proudly worked on 80% of the drugs approved by the U.S. Food and Drug Administration (FDA) in the past five years.

Equal Employment Opportunity
Charles River Laboratories is an Equal Opportunity Employer - M/F/Disabled/Vet. If you are interested in applying to Charles River Laboratories and need special assistance or an accommodation due to a disability to complete any forms or to otherwise participate in the resume submission process, please contact a member of our Human Resources team by sending an e-mail message to crrecruitment_US@crl.com. This contact is for accommodation requests for individuals with disabilities only and cannot be used to inquire about the status of applications.

For more information, please visit www.criver.com. 226601
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Hi, greetings from Alp Consulting. We have a job opening with one of our leading MNC clients. Please find the JD below, and share your updated profile so we can connect, discuss, and take it forward. Please send your profile to: Priyanka.g@alpconsulting.in

Interview Mode: Face to Face
Interview Date: 28-June-25, Saturday
Location: Chennai
Role: Data Software Engineer
Skills: Big Data / Hadoop + Python + Spark

Job Description:
- 5-12 years of experience in Big Data and data-related technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience in Apache Spark (must)
- Hands-on programming with Python (must)
- Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop

Thanks & Regards,
Priyanka.G
E: Priyanka.g@alpconsulting.in
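For candidates brushing up on the distributed-computing fundamentals this JD asks about, the classic word count captures the map/reduce pattern; the sketch below expresses it in PySpark with hypothetical HDFS paths.

```python
# Illustrative only: MapReduce-style distributed word count in PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()
sc = spark.sparkContext

counts = (
    sc.textFile("hdfs:///data/raw/logs/")   # read from HDFS
    .flatMap(lambda line: line.split())     # map: one record per word
    .map(lambda word: (word, 1))            # map: pair each word with 1
    .reduceByKey(lambda a, b: a + b)        # reduce: sum counts per word
)
counts.saveAsTextFile("hdfs:///data/out/wordcount/")
```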
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Lead Data Engineer
Location: All EXL Locations
Experience: 10 to 15 years

Job Summary
The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout, and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, and will work on project teams to analyze, design, develop, and deploy business intelligence / data integration solutions supporting a variety of customer needs. The position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings, and initiatives through mentoring and coaching.

The role provides technical expertise in needs identification, data modelling, data movement, and transformation mapping (source to target), automation, and testing strategies, translating business needs into technical solutions that adhere to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL) to address business and environmental challenges. It works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports, and business intelligence best practices; it is responsible for repeatable, lean, and maintainable enterprise BI design across organizations, and partners effectively with the client team. We expect leadership not only in the conventional sense but also within the team: candidates should exhibit qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing, and approachability.

Responsibilities:
- Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc.
- Create functional and technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, and data testing plans.
- Take a consultative approach with business users, asking questions to understand the business need, and derive the data flow and the conceptual, logical, and physical data models based on those needs.
- Perform data analysis to validate data models and to confirm they meet business needs.
- May serve as project or DI lead, overseeing multiple consultants from various competencies.
- Stay current with emerging and changing technologies to recommend and implement the most beneficial technologies and approaches for data integration.
- Ensure proper execution/creation of methodology, training, templates, resource plans, and engagement review processes.
- Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities where appropriate.
- Coordinate and consult with the project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else data-related at the project or business-unit level.
- Architect, design, develop, and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations, and best-practice standards. Toolsets include, but are not limited to: SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, and Qlik.
- Work with the report team to identify, design, and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.

Must have:
- Writing code in a programming language, with working experience in Python, PySpark, Databricks, Scala, or similar
- Data pipeline development and management: design, develop, and maintain ETL (Extract, Transform, Load) pipelines using AWS services such as AWS Glue, AWS Data Pipeline, Lambda, and Step Functions
- Implement incremental data processing using tools like Apache Spark (EMR), Kinesis, and Kafka
- Work with AWS data storage solutions such as Amazon S3, Redshift, RDS, DynamoDB, and Aurora
- Optimize data partitioning, compression, and indexing for efficient querying and cost optimization (see the sketch after this posting)
- Implement data lake architecture using AWS Lake Formation and the Glue Catalog
- Implement CI/CD pipelines for data workflows using CodePipeline, CodeBuild, and GitHub Actions

Good to have:
- Enterprise data modelling and semantic modelling, with working experience in ERwin, ER/Studio, PowerDesigner, or similar
- Logical/physical modelling on big data sets or a modern data warehouse, with working experience in ERwin, ER/Studio, PowerDesigner, or similar
- Agile process (Scrum cadences, roles, deliverables) and a basic understanding of Azure DevOps, JIRA, or similar

Key Skills: Python, PySpark, AWS, Databricks, SQL
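The partitioning and compression optimizations in the must-have list can be illustrated with a short PySpark sketch; the bucket names and columns below are hypothetical.

```python
# Illustrative only: partitioned, compressed Parquet output on S3.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("s3-optimize").getOrCreate()

events = spark.read.json("s3://example-raw/events/")

# Derive a date column to partition on, so queries that filter by date
# only scan the relevant S3 prefixes (partition pruning).
events = events.withColumn("event_date", F.to_date("event_timestamp"))

(
    events.write.mode("append")
    .partitionBy("event_date")            # partitioning enables pruning
    .option("compression", "snappy")      # compression cuts S3 scan cost
    .parquet("s3://example-curated/events/")
)
```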
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We have an opportunity with one of our clients; the detailed job description is below.
Experience Range: 5 to 12 years
Job Location: Coimbatore & Chennai
Event Date: 14-Jun-25 | Face to Face | Coimbatore
Interested candidates must be available for the event on 14-June-25.

Job Description:
1. 5-12 years of experience in Big Data and data-related technologies
2. Expert-level understanding of distributed computing principles
3. Expert-level knowledge of and experience in Apache Spark
4. Hands-on programming with Python
5. Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
6. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
7. Experience with messaging systems such as Kafka or RabbitMQ
8. Good understanding of Big Data querying tools such as Hive and Impala
9. Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files
10. Good understanding of SQL queries, joins, stored procedures, and relational schemas
11. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
12. Knowledge of ETL techniques and frameworks
13. Performance tuning of Spark jobs (illustrated in the sketch after this posting)
14. Experience with native cloud data services: AWS, Azure Databricks, or GCP
15. Ability to lead a team efficiently
16. Experience designing and implementing Big Data solutions
17. Practitioner of the AGILE methodology

WE OFFER:
1. Opportunity to work on technical challenges that may impact across geographies
2. Vast opportunities for self-development: online university, knowledge-sharing opportunities globally, learning opportunities through external certifications
3. Opportunity to share your ideas on international platforms
4. Sponsored Tech Talks & Hackathons
5. Possibility to relocate to any EPAM office for short- and long-term projects
6. Focused individual development
7. Benefit package: health and medical benefits, retirement benefits, paid time off, flexible benefits
8. Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
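Performance tuning of Spark jobs (point 13 above) often starts with join strategy; the sketch below shows one common technique, a broadcast-join hint, with hypothetical table paths.

```python
# Illustrative only: broadcast a small dimension table to avoid a shuffle join.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

orders = spark.read.parquet("/data/orders/")        # large fact table
countries = spark.read.parquet("/data/countries/")  # small lookup table

# Without the hint, Spark may shuffle both sides; broadcasting ships the
# small table to every executor instead.
enriched = orders.join(broadcast(countries), "country_code")
enriched.write.mode("overwrite").parquet("/data/orders_enriched/")
```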
Posted 1 week ago
5.0 - 9.0 years
0 - 3 Lacs
Hyderabad, Pune, Chennai
Work from Office
Position: Azure Data Engineer
Locations: Bangalore, Pune, Hyderabad, Chennai & Coimbatore
Key skills: Azure Databricks, Azure Data Factory, Hadoop
Relevant experience: ADF, ADLS, Databricks: 4 years; Hadoop: 3 or 3.5 years
Total experience: 5 years

Must-have skills:
- Cloud certified in one of these categories: Azure Data Engineer, or Azure Data Factory and Azure Databricks
- Spark (PySpark or Scala), SQL, data ingestion, and curation
- Semantic modelling / optimization of the data model to work within Rahona
- Experience in Azure ingestion from on-prem sources, e.g. mainframe, SQL Server, Oracle (see the sketch after this posting)
- Experience in Sqoop / Hadoop
- Microsoft Excel (for metadata files with ingestion requirements)
- Any other Azure/AWS/GCP certificate and hands-on cloud data engineering experience
- Strong programming skills in at least one of Python, Scala, or Java
- Strong SQL skills (T-SQL or PL/SQL)
- Data file movement via mailbox
- Source-code versioning/promotion tools, e.g. Git/Jenkins
- Orchestration tools, e.g. Autosys, Oozie

Nice-to-have skills:
- Experience working with mainframe files
- Experience in an Agile environment with JIRA/Confluence tools
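The on-prem ingestion requirement above can be sketched in Databricks-style PySpark over JDBC; the hostnames, credentials, and storage paths below are hypothetical, the SQL Server JDBC driver is assumed to be available on the cluster, and real secrets should come from a key vault rather than literals.

```python
# Illustrative only: pull an on-prem SQL Server table and land it as Delta.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("onprem-ingest").getOrCreate()

source = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
    .option("dbtable", "dbo.customers")
    .option("user", "ingest_user")
    .option("password", "<from-key-vault>")  # placeholder, never hard-code
    .load()
)

# Land the data in ADLS Gen2 as Delta for downstream curation.
(
    source.write.format("delta").mode("overwrite")
    .save("abfss://raw@exampleaccount.dfs.core.windows.net/sales/customers/")
)
```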
Posted 2 weeks ago
India has seen a rise in demand for professionals skilled in Sqoop, a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases. Job seekers with expertise in Sqoop can explore various opportunities in the Indian job market.
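For readers new to the tool, a typical Sqoop import pulls a relational table into HDFS in parallel. The sketch below wraps the standard `sqoop import` command in a small Python runner; the JDBC URL, credentials file, and directories are hypothetical, while the flags themselves are standard Sqoop options.

```python
# Illustrative only: invoke `sqoop import` to copy an RDBMS table into HDFS.
import subprocess

subprocess.run(
    [
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost:3306/sales",  # source RDBMS
        "--username", "etl_user",
        "--password-file", "/user/etl/.db_password",    # avoid inline passwords
        "--table", "orders",                            # table to pull
        "--target-dir", "/data/raw/orders",             # HDFS destination
        "--num-mappers", "4",                           # parallel map tasks
    ],
    check=True,
)
```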
The average salary range for Sqoop professionals in India varies by experience level:
- Entry-level: Rs. 3-5 lakhs per annum
- Mid-level: Rs. 6-10 lakhs per annum
- Experienced: Rs. 12-20 lakhs per annum

Typically, a career in Sqoop progresses as follows:
1. Junior Developer
2. Sqoop Developer
3. Senior Developer
4. Tech Lead

In addition to expertise in Sqoop, professionals in this field are often expected to have knowledge of:
- Apache Hadoop
- SQL
- Data warehousing concepts
- ETL tools
As you explore job opportunities in the field of Sqoop in India, make sure to prepare thoroughly and showcase your skills confidently during interviews. Stay updated with the latest trends and advancements in Sqoop to enhance your career prospects. Good luck with your job search!