1.0 - 5.0 years
7 - 8 Lacs
Bengaluru
Work from Office
Diverse Lynx is looking for a Snowflake Developer to join our dynamic team and embark on a rewarding career journey. A Developer is responsible for designing, developing, and maintaining software applications and systems. They collaborate with a team of software developers, designers, and stakeholders to create software solutions that meet the needs of the business.

Key responsibilities: Design, code, test, and debug software applications and systems. Collaborate with cross-functional teams to identify and resolve software issues. Write clean, efficient, and well-documented code. Stay current with emerging technologies and industry trends. Participate in code reviews to ensure code quality and adherence to coding standards. Participate in the full software development life cycle, from requirement gathering to deployment. Provide technical support and troubleshooting for production issues.

Requirements: Strong programming skills in one or more programming languages, such as Python, Java, C++, or JavaScript. Experience with software development tools, such as version control systems (e.g. Git), integrated development environments (IDEs), and debugging tools. Familiarity with software design patterns and best practices. Good communication and collaboration skills.
Posted 2 days ago
5.0 years
0 Lacs
India
Remote
Job Title: Data Analytics and Business Intelligence Engineer (Databricks Specialist)
Location: Remote
Timings: 6.30 PM IST - 3.30 AM IST
Job Type: Full-Time

Job Summary: We are seeking a highly skilled and analytical Data Analytics and Business Intelligence (BI) Engineer with strong experience in Databricks to join our data team. The ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines, dashboards, and analytics solutions that drive business insights and decision-making.

Key Responsibilities: Design and implement robust data pipelines using Databricks, Apache Spark, and Delta Lake. Develop and maintain ETL/ELT workflows to ingest, transform, and store large volumes of structured and unstructured data. Build and optimize data models and data marts to support self-service BI and advanced analytics. Create interactive dashboards and reports using tools like Power BI, Tableau, or Looker. Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights. Ensure data quality, integrity, and governance across all analytics solutions. Monitor and improve the performance of data pipelines and BI tools. Stay current with emerging technologies and best practices in data engineering and analytics.

Required Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. 5+ years of experience in data engineering, analytics, or BI development. Strong proficiency in Databricks, Apache Spark, and SQL. Experience with cloud platforms such as Azure, AWS, or GCP. Proficiency in Python or Scala for data processing. Hands-on experience with data visualization tools (Power BI, Tableau, etc.). Solid understanding of data warehousing concepts, dimensional modeling, and data lakes. Familiarity with CI/CD pipelines, version control (Git), and Agile methodologies.

Preferred Qualifications: Databricks certification (e.g., Databricks Certified Data Engineer Associate/Professional). Experience with MLflow, Delta Live Tables, or Unity Catalog. Knowledge of data governance, security, and compliance standards. Strong communication and stakeholder management skills.
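The pipeline work this posting describes centers on Databricks, Spark, and Delta Lake. As a rough illustration only (the storage path, table, and column names below are hypothetical assumptions, not taken from the posting), a minimal batch ingest into a curated Delta table might look like this:

```python
# Minimal sketch of a Databricks-style batch ingest into a Delta table.
# The storage path, schema, and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Read raw JSON files landed in cloud object storage (assumed example path).
raw = spark.read.json("abfss://landing@examplestore.dfs.core.windows.net/orders/")

# Basic de-duplication, typing, and filtering before persisting to the curated layer.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").isNotNull())
)

# Append to a Delta table partitioned by date so downstream BI queries can prune partitions.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_curated"))
```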
Posted 2 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
SDET Engineer – Backend and Data-Driven Applications
Location: Bangalore

About Us
FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world.

The Opportunity
As an SDET engineer on our Generative AI team, you will work at the frontier of language model applications, developing novel solutions for various areas of the FICO platform, including fraud investigation, decision automation, process flow automation, and optimization. We seek a highly skilled engineer with a strong foundation in digital product development and a zeal for innovation, who will be responsible for deploying product updates, identifying production issues, and implementing integrations. The SDET should thrive in agile, fast-paced environments, champion test automation and CI/CD quality gates, and consistently deliver robust, customer-centric validation for data-driven and backend systems. You will have the opportunity to make a meaningful impact on FICO's platform by infusing it with next-generation AI capabilities. You'll work with a team, leveraging your skills to build solutions and drive innovation forward.

What You'll Contribute
Design and implement robust test plans and strategies to validate APIs, backend services, and data-intensive workflows across ML and GenAI product stacks. Perform hands-on testing (manual and automated) across functional, regression, usability, and performance layers, including both black-box and grey-box testing techniques. Build and maintain automation frameworks for both API and UI layers, enabling continuous and reliable validation across environments. Collaborate closely with Data Engineers, Backend Engineers, and MLOps teams to test ETL pipelines, data transformations, and model deployment workflows. Write and execute automated tests using tools such as Selenium, RestAssured, Pytest, or Postman to validate both synchronous and asynchronous system behaviors. Execute complex SQL queries and data validations across RDBMS and NoSQL stores to ensure data accuracy and integrity in production-like environments. Integrate tests with CI/CD pipelines (e.g., GitHub Actions, Jenkins, Argo Workflows), and enable shift-left testing practices as part of the engineering workflow. Evaluate test results, identify root causes, and log issues in defect tracking tools such as JIRA; drive continuous quality improvements and regression stability. Partner with QA leadership and development teams to assess test coverage, identify quality gaps, and champion testability and observability as core design principles. Participate in release planning and sprint ceremonies, and provide quality signals and product readiness assessments throughout the SDLC.

What We're Seeking
6+ years of experience in software quality engineering, preferably with experience validating backend and data-heavy systems.
Deep understanding of QA methodologies, software testing life cycle, and test automation design patterns. Proficient in Java or Python for test automation and scripting. Hands-on experience building automation frameworks for REST APIs, Web Services, and microservices. Strong SQL skills and experience validating data pipelines, relational and NoSQL databases. Familiarity with cloud platforms (AWS preferred), containerization (Docker), and CI/CD tools like GitHub Actions or Jenkins. Solid understanding of Agile and Scrum methodologies; experience working in fast-paced, iterative development cycles. Proficiency with test management and defect tracking tools (e.g., JIRA, QTest, TestRail, Quality Center). Strong debugging and triaging skills, with a knack for identifying edge cases and performance bottlenecks. Strong communication, problem-solving, and collaboration skills, particularly in cross-functional teams including backend, ML, and DevOps stakeholders. Excellent collaboration and communication skills, with a proven ability to work effectively in cross-functional, globally distributed teams. A bachelor's degree in Computer Science, Engineering, or a related discipline, or equivalent hands-on industry experience.

Our Offer to You
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
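To make the API-validation work described above concrete, here is a minimal sketch of the kind of automated check the posting mentions (Pytest is one of the tools it names; the service URL, payload, and response fields below are hypothetical, not FICO's actual API):

```python
# Minimal pytest-style API contract check; the base URL, endpoint, payload, and
# expected response fields are illustrative assumptions only.
import requests

BASE_URL = "https://api.example-test-env.internal"  # assumed test environment

def test_create_fraud_case_returns_id_and_open_status():
    payload = {"customer_id": "C-1001", "amount": 250.0, "channel": "card"}
    resp = requests.post(f"{BASE_URL}/v1/fraud-cases", json=payload, timeout=10)

    # Contract assertions: HTTP status and response schema.
    assert resp.status_code == 201
    body = resp.json()
    assert "case_id" in body
    assert body["status"] in {"OPEN", "PENDING_REVIEW"}
```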
Posted 2 days ago
5.0 - 9.0 years
15 - 22 Lacs
Hyderabad
Hybrid
Job Description
We are seeking a skilled Informatica ETL Developer with 5-6 years of experience in ETL and Business Intelligence projects. The ideal candidate will have a strong background in Informatica PowerCenter, a solid understanding of data warehousing concepts, and hands-on experience in SQL, performance tuning, and production support. This role involves designing and maintaining robust ETL pipelines to support digital transformation initiatives for clients in manufacturing, automotive, transportation, and engineering domains.

Key Responsibilities: Design, develop, and maintain ETL workflows using Informatica PowerCenter. Troubleshoot and optimize ETL jobs for performance and reliability. Analyze complex data sets and write advanced SQL queries for data validation and transformation. Collaborate with data architects and business analysts to implement data warehousing solutions. Apply SDLC methodologies throughout the ETL development lifecycle. Support production environments by identifying and resolving data and performance issues. Work with Unix shell scripting for job automation and scheduling. Contribute to the design of technical architectures that support digital transformation.

Required Skills: 3-5 years of hands-on experience with Informatica PowerCenter. Proficiency in SQL and familiarity with NoSQL platforms. Experience in ETL performance tuning and troubleshooting. Solid understanding of Unix/Linux environments and scripting. Excellent verbal and written communication skills.

Preferred Qualifications: AWS Certification or experience with cloud-based data integration is a plus. Exposure to data modeling and data governance practices.
Posted 2 days ago
7.0 - 10.0 years
9 - 12 Lacs
Mumbai
Work from Office
Experience: 7-10 years
Educational qualification: bachelor's degree or higher in a related field

Summary
Applies the principles of software engineering to design, develop, maintain, test, and evaluate computer software that provides business capabilities, solutions, and/or product suites. Provides systems life cycle management (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.) to ensure delivery of technical solutions is on time and within budget. Researches and supports the integration of emerging technologies. Provides knowledge and support for applications development, integration, and maintenance. Develops program logic for new applications or analyzes and modifies logic in existing applications. Analyzes requirements, tests, and integrates application components. Ensures that system improvements are successfully implemented. May focus on web/internet applications specifically, using a variety of languages and platforms. Defines application complexity drivers, estimates development efforts, creates milestones and/or timelines, and tracks progress towards completion.

Application Development/Programming: Identifies areas for improvement and develops innovative enhancements using available software development tools, following the design requirements of the customer.

System and Technology Integration: Interprets internal/external business challenges and recommends integration of the appropriate systems, applications and technology to provide a fully functional solution to a business problem.

Development and support of the activities outlined below. (Note: Other items may arise that are not directly referenced in this scope, such as technology updates, technology expansion, DevOps pipeline changes, information security, and technical debt compliance.)

New Development: Development of new features/functionality driven by PI (Program Increment). This will include documenting Features and Stories, obtaining approvals from Business and UPS IT Product Owners, story analysis, design of the required solution, review with UPS SME(s), coding, testing, non-functional requirements (reporting, production capacity, performance, security), and migration/deployment.

Scope at a high level
The scope of this project includes, but is not limited to, the following activities: Develop new integration pipelines with SC360 using Databricks, Azure Functions, Azure Data Factory, Azure DevOps, Cosmos DB, Oracle, Azure SQL, and SSIS packages. Work in alignment with business teams to support development effort for all SC360 data-related PI items. Develop fixes for defects and issues identified in the production environment. Build POCs as needed to supplement the SC360 platform. Develop and implement architectural changes as needed in the SC360 platform to increase efficiency, reduce cost, and monitor the platform. Provide production support assistance as needed. NFRs include, but are not limited to, the ability to build according to UPS Coding Standards, including security compliance.

Required Skills
General skills: Strong communication skills (both oral and written); will need to work closely with UPS IT and Business Product Owners, with potential direct engagement with UPS customers. Agile life-cycle management. Vulnerability/threat analysis. Testing. Deployments across environments and segregation of duties.

Technical skills: Experience with Azure Databricks, SQL, and ETL/SSIS packages - mandatory and very critical. Azure Data Factory, Function Apps, DevOps - a must. Experience with Azure and other cloud technologies. Databases: Oracle, SQL Server, and Cosmos DB experience needed. Azure services (Key Vault, App Configuration, Blob Storage, Redis Cache, Service Bus, Event Grid, ADLS, App Insights, etc.). Knowledge of STRIIM - nice to have. Microservices experience - preferred. Experience with Angular and .NET Core - not critical.
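As a purely illustrative sketch of the Cosmos DB integration work listed above (the account endpoint, key handling, database, container, and document shape are assumptions, and the container is assumed to be partitioned on /id):

```python
# Illustrative only: upserting processed records into Cosmos DB with the Python SDK.
# Endpoint, key, database, container, and fields are assumed placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://example-account.documents.azure.com:443/",
    credential="<account-key>",  # in practice this would come from Key Vault
)
container = client.get_database_client("sc360").get_container_client("shipments")

records = [
    {"id": "SHP-1001", "status": "IN_TRANSIT", "lastUpdated": "2024-01-15T10:00:00Z"},
    {"id": "SHP-1002", "status": "DELIVERED", "lastUpdated": "2024-01-15T10:05:00Z"},
]

for record in records:
    # upsert_item inserts the document, or replaces it if the id already exists
    container.upsert_item(record)
```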
Posted 2 days ago
8.0 - 12.0 years
25 - 30 Lacs
Chennai
Hybrid
Job Title: Senior Data Developer Azure ADF and Databricks Experience Range: 8-12 Years Location: Chennai, Hybrid Employment Type: Full-Time About the role We are seeking an experienced Senior Data Developer to join our data engineering team responsible for building and maintaining complex data solutions using Azure Data Factory (ADF), Azure Databricks , and Cosmos DB . The role involves designing and developing scalable data pipelines, implementing data transformations, and ensuring high data quality and performance. The Senior Data Developer will work closely with data architects, testers, and analysts to deliver robust data solutions that support strategic business initiatives. The ideal candidate should possess deep expertise in big data technologies, data integration, and cloud-native data engineering solutions on Microsoft Azure. This role also involves coaching junior developers, conducting code reviews, and driving strategic improvements in data architecture and design patterns. Key Responsibilities Data Solution Design and Development : Design and develop scalable and high-performance data pipelines using Azure Data Factory (ADF). Implement data transformations and processing using Azure Databricks. Develop and maintain NoSQL data models and queries in Cosmos DB. Optimize data pipelines for performance, scalability, and cost efficiency. Data Integration and Architecture: Integrate structured and unstructured data from diverse data sources. Collaborate with data architects to design end-to-end data flows and system integrations. Implement data security, governance, and compliance standards. Performance Tuning and Optimization: Monitor and tune data pipelines and processing jobs for performance and cost efficiency. Optimize data storage and retrieval strategies for Azure SQL and Cosmos DB. Collaboration and Mentoring: Collaborate with cross-functional teams including data testers, architects, and business analysts. Conduct code reviews and provide constructive feedback to improve code quality. Mentor junior developers, fostering best practices in data engineering and cloud development. Primary Skills Data Engineering: Azure Data Factory (ADF), Azure Databricks. Cloud Platform: Microsoft Azure (Data Lake Storage, Cosmos DB). Data Modeling: NoSQL data modeling, Data warehousing concepts. Performance Optimization: Data pipeline performance tuning and cost optimization. Programming Languages: Python, SQL, PySpark Secondary Skills DevOps and CI/CD: Azure DevOps, CI/CD pipeline design and automation. Security and Compliance: Implementing data security and governance standards. Agile Methodologies: Experience in Agile/Scrum environments. Leadership and Mentoring: Strong communication and coaching skills for team collaboration. Soft Skills Strong problem-solving abilities and attention to detail. Excellent communication skills, both verbal and written. Effective time management and organizational capabilities. Ability to work independently and within a collaborative team environment. Strong interpersonal skills to engage with cross-functional teams. Educational Qualifications Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. 
Relevant certifications in Azure and Data Engineering, such as: Microsoft Certified: Azure Data Engineer Associate Microsoft Certified: Azure Solutions Architect Expert Databricks Certified Data Engineer Associate or Professional About the Team As a Senior Data Developer , you will be working with a dynamic, cross-functional team that includes developers, product managers, and other quality engineers. You will be a key player in the quality assurance process, helping shape testing strategies and ensuring the delivery of high-quality web applications.
Posted 2 days ago
8.0 - 12.0 years
25 - 30 Lacs
Mumbai
Work from Office
Job Title: Senior Data Developer Azure ADF and Databricks Experience Range: 8-12 Years Location: Chennai, Hybrid Employment Type: Full-Time About the role We are seeking an experienced Senior Data Developer to join our data engineering team responsible for building and maintaining complex data solutions using Azure Data Factory (ADF), Azure Databricks , and Cosmos DB . The role involves designing and developing scalable data pipelines, implementing data transformations, and ensuring high data quality and performance. The Senior Data Developer will work closely with data architects, testers, and analysts to deliver robust data solutions that support strategic business initiatives. The ideal candidate should possess deep expertise in big data technologies, data integration, and cloud-native data engineering solutions on Microsoft Azure. This role also involves coaching junior developers, conducting code reviews, and driving strategic improvements in data architecture and design patterns. Key Responsibilities Data Solution Design and Development : Design and develop scalable and high-performance data pipelines using Azure Data Factory (ADF). Implement data transformations and processing using Azure Databricks. Develop and maintain NoSQL data models and queries in Cosmos DB. Optimize data pipelines for performance, scalability, and cost efficiency. Data Integration and Architecture: Integrate structured and unstructured data from diverse data sources. Collaborate with data architects to design end-to-end data flows and system integrations. Implement data security, governance, and compliance standards. Performance Tuning and Optimization: Monitor and tune data pipelines and processing jobs for performance and cost efficiency. Optimize data storage and retrieval strategies for Azure SQL and Cosmos DB. Collaboration and Mentoring: Collaborate with cross-functional teams including data testers, architects, and business analysts. Conduct code reviews and provide constructive feedback to improve code quality. Mentor junior developers, fostering best practices in data engineering and cloud development. Primary Skills Data Engineering: Azure Data Factory (ADF), Azure Databricks. Cloud Platform: Microsoft Azure (Data Lake Storage, Cosmos DB). Data Modeling: NoSQL data modeling, Data warehousing concepts. Performance Optimization: Data pipeline performance tuning and cost optimization. Programming Languages: Python, SQL, PySpark Secondary Skills DevOps and CI/CD: Azure DevOps, CI/CD pipeline design and automation. Security and Compliance: Implementing data security and governance standards. Agile Methodologies: Experience in Agile/Scrum environments. Leadership and Mentoring: Strong communication and coaching skills for team collaboration. Soft Skills Strong problem-solving abilities and attention to detail. Excellent communication skills, both verbal and written. Effective time management and organizational capabilities. Ability to work independently and within a collaborative team environment. Strong interpersonal skills to engage with cross-functional teams. Educational Qualifications Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. 
Relevant certifications in Azure and Data Engineering, such as: Microsoft Certified: Azure Data Engineer Associate Microsoft Certified: Azure Solutions Architect Expert Databricks Certified Data Engineer Associate or Professional About the Team As a Senior Data Developer , you will be working with a dynamic, cross-functional team that includes developers, product managers, and other quality engineers. You will be a key player in the quality assurance process, helping shape testing strategies and ensuring the delivery of high-quality web applications.
Posted 2 days ago
8.0 - 10.0 years
13 - 15 Lacs
Chennai
Remote
Job role: ETL Data Testing + Python Automation
Location: Remote
Notice Period: 30 Days Max
Required Skills: 3+ years of hands-on experience in Python automation. Experience with ETL tools like Informatica. Familiarity with Unix/Linux environments.
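As a rough illustration of the kind of ETL data check such a role automates (the table names are placeholders, and any DB-API compatible connections, e.g. from psycopg2 or cx_Oracle, are assumed):

```python
# Illustrative sketch: a simple source-vs-target row-count reconciliation that an
# ETL test suite might run after a load. Tables and connections are assumptions.
def row_counts_match(src_conn, tgt_conn, src_table: str, tgt_table: str) -> bool:
    """Return True when the source and target tables hold the same number of rows."""
    src_cur = src_conn.cursor()
    tgt_cur = tgt_conn.cursor()
    src_cur.execute(f"SELECT COUNT(*) FROM {src_table}")
    tgt_cur.execute(f"SELECT COUNT(*) FROM {tgt_table}")
    return src_cur.fetchone()[0] == tgt_cur.fetchone()[0]

# Example usage (connections would come from the project's own configuration):
# assert row_counts_match(src_conn, tgt_conn, "staging.orders", "dwh.fact_orders")
```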
Posted 2 days ago
4.0 - 6.0 years
10 - 16 Lacs
Kolkata, Pune, Mumbai (All Areas)
Hybrid
JOB TITLE: Software Developer II: Oracle Data Integrator (ODI)

OVERVIEW OF THE ROLE:
We are looking for an experienced Oracle Data Integrator (ODI) and Oracle Analytics Cloud (OAC) Consultant to join our dynamic team. You will be responsible for designing, implementing, and optimizing cutting-edge data integration and analytics solutions. Your contributions will be pivotal in enhancing data-driven decision-making and delivering actionable insights across the organization.

Key Responsibilities: Develop robust data integration solutions using Oracle Data Integrator (ODI). Create, optimize, and maintain ETL/ELT workflows and processes. Configure and manage Oracle Analytics Cloud (OAC) to provide interactive dashboards and advanced analytics. Integrate and transform data from various sources to generate meaningful insights using OAC. Monitor and troubleshoot data pipelines and analytics solutions to ensure optimal performance. Ensure data quality, accuracy, and integrity across integration and reporting systems. Provide training and support to end-users for OAC and ODI solutions. Analyze, design, develop, fix, and debug software programs for commercial or end-user applications. Write code, complete programming, and perform testing and debugging of applications.

Technical Skills: Expertise in ODI components such as Topology, Designer, Operator, and Agent. Experience in Java and WebLogic development. Proficiency in developing OAC dashboards, reports, and KPIs. Strong knowledge of SQL and PL/SQL for advanced data manipulation. Familiarity with Oracle databases and Oracle Cloud Infrastructure (OCI). Experience in data modeling and designing data warehouses. Strong analytical and problem-solving abilities. Excellent communication and client-facing skills. Hands-on, end-to-end DWH implementation experience using ODI. Should have experience in developing ETL processes - ETL control tables, error logging, auditing, data quality, etc. Should be able to implement reusability, parameterization, workflow design, etc. Expertise in the Oracle ODI tool set and Oracle PL/SQL; knowledge of the ODI master and work repositories. Knowledge of data modelling and ETL design. Setting up topology, building objects in Designer, monitoring Operator, different types of KMs, Agents, etc. Packaging components and database operations like aggregate, pivot, union, etc. using ODI mappings, error handling, automation using ODI, load plans, and migration of objects. Design and develop complex mappings, process flows, and ETL scripts. Experience in performance tuning of mappings. Ability to design ETL unit test cases and debug ETL mappings. Expertise in developing load plans and scheduling jobs. Ability to design data quality and reconciliation frameworks using ODI. Integrate ODI with multiple sources/targets. Experience in error recycling/management using ODI and PL/SQL. Expertise in database development (SQL/PL/SQL) for PL/SQL-based applications. Experience of creating PL/SQL packages, procedures, functions, triggers, views, materialized views, and exception handling for retrieving, manipulating, checking, and migrating complex datasets in Oracle. Experience in data migration using SQL*Loader and import/export. Experience in SQL tuning and optimization using explain plans and SQL trace files. Strong knowledge of ELT/ETL concepts, design, and coding. Partitioning and indexing strategy for optimal performance.
Should have experience of interacting with customers, understanding business requirement documents, and translating them into ETL specifications and high- and low-level design documents. Ability to work with minimal guidance or supervision in a time-critical environment.

Experience: 4-6 years of overall experience in industry. 3+ years of experience with Oracle Data Integrator (ODI) in data integration projects. 2+ years of hands-on experience with Oracle Analytics Cloud (OAC).

Preferred Skills: Knowledge of Oracle Autonomous Data Warehouse (ADW) and Oracle Integration Cloud (OIC). Familiarity with other analytics tools like Tableau or Power BI. Experience with scripting languages such as Python or shell scripting. Understanding of data governance and security best practices.

Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.

ABOUT HASHEDIN
We are software engineers who solve business problems with a Product Mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture.

WHY SHOULD YOU JOIN US?
With the agility of a start-up and the opportunities of an enterprise, every day at HashedIn, your work will make an impact that matters. So, if you are a problem solver looking to thrive in a dynamic fun culture of inclusion, collaboration, and high performance – HashedIn is the place to be! From learning to leadership, this is your chance to take your software engineering career to the next level. So, what impact will you make? Visit us @ https://hashedin.com
Posted 2 days ago
3.0 - 8.0 years
2 Lacs
Hyderabad
Work from Office
Key responsibilities: Understand the programs service catalog and document the list of tasks which has to be performed for each Lead the design, development, and maintenance of ETL processes to extract, transform, and load data from various sources into our data warehouse Implement best practices for data loading, ensuring optimal performance and data quality Utilize your expertise in IDMC to establish and maintain data governance, data quality, and metadata management processes Implement data controls to ensure compliance with data standards, security policies, and regulatory requirements Collaborate with data architects to design and implement scalable and efficient data architectures that support business intelligence and analytics requirements Work on data modeling and schema design to optimize database structures for ETL processes Identify and implement performance optimization strategies for ETL processes, ensuring timely and efficient data loading Troubleshoot and resolve issues related to data integration and performance bottlenecks Collaborate with cross-functional teams, including data scientists, business analysts, and other engineering teams, to understand data requirements and deliver effective solutions Provide guidance and mentorship to junior members of the data engineering team Create and maintain comprehensive documentation for ETL processes, data models, and data flows Ensure that documentation is kept up-to-date with any changes to data architecture or ETL workflows Use Jira for task tracking and project management Implement data quality checks and validation processes to ensure data integrity and reliability Maintain detailed documentation of data engineering processes and solutions Required Skills: Bachelor's degree in Computer Science, Engineering, or a related field Proven experience as a Senior ETL Data Engineer, with a focus on IDMC / IICS Strong proficiency in ETL tools and frameworks (e g , Informatica Cloud, Talend, Apache NiFi) Expertise in IDMC principles, including data governance, data quality, and metadata management Solid understanding of data warehousing concepts and practices Strong SQL skills and experience working with relational databases Excellent problem-solving and analytical skills Qualified candidates should APPLY NOW for immediate consideration! Please hit APPLY to provide the required information, and we will be back in touch as soon as possible Thank you! ABOUT INNOVA SOLUTIONS: Founded in 1998 and headquartered in Atlanta, Georgia, Innova Solutions employs approximately 50,000 professionals worldwide and reports an annual revenue approaching $3 Billion Through our global delivery centers across North America, Asia, and Europe, we deliver strategic technology and business transformation solutions to our clients, enabling them to operate as leaders within their fields Recent Recognitions: One of Largest IT Consulting Staffing firms in the USA Recognized as #4 by Staffing Industry Analysts (SIA 2022) ClearlyRated Client Diamond Award Winner (2020) One of the Largest Certified MBE Companies in the NMSDC Network (2022) Advanced Tier Services partner with AWS and Gold with MS
Posted 2 days ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Leadership Skills: Should be able to manage a large team of 20+ professionals, demonstrating strategic leadership and team collaboration. Will be responsible for broader discretion of hiring and firing, recommendation of ratings and promotions basis the performance. Technical Skills: Advanced expertise in data analytics and data visualization techniques. Proficient in data engineering, including data transformation, integration, acquisition, preparation, modeling, and master data management. Extensive experience in implementing visualization projects using tools like MS Power BI, Tableau, Spotfire, and others. Proficient with Microsoft BI technologies (SQL, SSIS, SSAS, SSRS) and Azure-BI solutions such as ADF, Synapse, and Databricks. Programming skills in SQL, DAX, MDX, Python, Power Shell Scripting, Node JS, React JS, C++, with proficiency in databases like MS-SQL Server, Access, and MySQL. Proficiency in developing and managing Excel macro-based dashboards. Proficiency in performance tuning and optimization for processing large datasets, ensuring efficient data retrieval and minimal response times. Experience with partitioning strategies, indexing, and query optimization to manage and expedite data access in high-volume environments. Skilled in using big data technologies and distributed computing frameworks to address scalability and performance challenges in large-scale data processing. Expertise in designing and optimizing data pipelines and ETL processes to improve data flow and reduce bottlenecks in extensive datasets. Familiarity with advanced analytics and machine learning algorithms for efficient processing and analysis of massive datasets to derive actionable insights. Knowledge of cloud-based data services and tools for scalable storage, analysis, and management of large volumes of data, including Azure Synapse, Snowflake, and Amazon Redshift. Soft Skills: Effective communication, analytical thinking, and problem-solving abilities. Managerial Roles: As a Manager at EY GDS, one should be capable of designing and delivering analytics foundations, managing a team, constructing dashboards, and employing critical thinking to resolve complex audit and non-audit issues. The role involves developing, reviewing, and analyzing solution architecture, gathering and defining requirements, leading project design, and overseeing implementation. He/She is responsible for owning the engagement economics of the team, updating key findings to the leadership, and assisting in alignment in case of discrepancies. It is essential to align and collaborate with the Service Delivery Manager (SDM) from various Digital delivery regions to perform project scoping, estimations, and strategically drive the deliveries to success. The Manager should identify the high and low outliers in the team and help align low outliers with the right learning paths to support their career alignments. Own the learning and development of the team and periodically revisit the learnings and advice the team and align them as per the emerging market trends. 
Perform R&D and produce POCs that demonstrate the team's capabilities in implementing advanced concepts in visualizations, and organize calls with various groups to explain the features and benefits. Implement these in engagements to drive their success. Should have periodic alignment on the performance of resources deployed in various engagements. Prioritise and assist the team in generating automation savings with unique ideas and with the help of cutting-edge implementations.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 days ago
4.0 - 8.0 years
8 - 12 Lacs
Chennai
Work from Office
About ValGenesis ValGenesis is a leading digital validation platform provider for life sciences companies. ValGenesis suite of products are used by 30 of the top 50 global pharmaceutical and biotech companies to achieve digital transformation, total compliance and manufacturing excellence/intelligence across their product lifecycle. Learn more about working for ValGenesis, the de facto standard for paperless validation in Life Sciences: https://www.youtube.com/watch?v=tASq7Ld0JsQ About the Role: We are looking for experienced database developer who could join our engineering team to build the enterprise applications for our global customers. If you are a technology enthusiast and have passion to develop enterprise cloud products with quality, security, and performance, we are eager to discuss with you about the potential role. Responsibilities Database Development Utilize expertise in MS SQL Server, PostgreSQL to design and develop efficient and scalable database solutions. Collaborate with development stakeholders to understand and implement database requirements. Write and optimize complex SQL queries, stored procedures, and functions. Database tuning & configurations of servers Knowledge on both Cloud & on-premises database Knowledge of SaaS based applications development ETL Integration Leverage experience with ETL tools such as ADF and SSIS to facilitate seamless data migration. Design and implement data extraction, transformation, and loading processes. Ensure data integrity during the ETL process and troubleshoot any issues that may arise. Reporting Develop and maintain SSRS reports based on customer needs. Collaborate with stakeholders to understand reporting requirements and implement effective solutions. Performance Tuning Database performance analysis using Dynatrace, NewRelic or similar tools. Analyze query performance and implement tuning strategies to optimize database performance. Conduct impact analysis and resolve production issues within specified SLAs. Version Control and Collaboration Utilize GIT and SVN for version control of database scripts and configurations. Collaborate with cross-functional teams using tools such as JIRA for story mapping, tracking, and issue resolution. Documentation Document database architecture, processes, and configurations. Provide detailed RCA (Root Cause Analysis) for any database-related issues. Requirements 6 - 9 years of hands-on experience in software development. Must have extensive experience in stored procedure development and performance fine tuning. Proficient in SQL, MS SQL Server, SSRS, and SSIS. Working knowledge of C# ASP.NET web application development. Ability to grasp new concepts and facilitate continuous learning. Strong sense of responsibility and accountability. We’re on a Mission In 2005, we disrupted the life sciences industry by introducing the world’s first digital validation lifecycle management system. ValGenesis VLMS® revolutionized compliance-based corporate validation activities and has remained the industry standard. Today, we continue to push the boundaries of innovation enhancing and expanding our portfolio beyond validation with an end-to-end digital transformation platform. We combine our purpose-built systems with world-class consulting services to help every facet of GxP meet evolving regulations and quality expectations. The Team You’ll Join Our customers’ success is our success. We keep the customer experience centered in our decisions, from product to marketing to sales to services to support. 
Life sciences companies exist to improve humanity’s quality of life, and we honor that mission. We work together. We communicate openly, support each other without reservation, and never hesitate to wear multiple hats to get the job done. We think big. Innovation is the heart of ValGenesis. That spirit drives product development as well as personal growth. We never stop aiming upward. We’re in it to win it. We’re on a path to becoming the number one intelligent validation platform in the market, and we won’t settle for anything less than being a market leader. How We Work Our Chennai, Hyderabad and Bangalore offices are onsite, 5 days per week. We believe that in-person interaction and collaboration fosters creativity, and a sense of community, and is critical to our future success as a company. ValGenesis is an equal-opportunity employer that makes employment decisions on the basis of merit. Our goal is to have the best-qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristics protected by local law.
Posted 2 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Size: Mid-Sized
Experience Required: 3 - 6 years
Working Days: 5 days/week
Office Location: Karnataka, Bengaluru

Role & Responsibilities
Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture. Ability to work with a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer, and load activities. Develop data pipelines that make data available across platforms. Should be comfortable in executing ETL (Extract, Transform and Load) processes which include data ingestion, data cleaning and curation into a data warehouse, database, or data platform. Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines. Work closely with DevOps and senior Architects to come up with scalable system and model architectures for enabling real-time and batch services.

Ideal Candidate
5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs. Well versed with the concepts of data warehousing, data modelling and/or data analysis. Experience using and building pipelines and performing ETL with industry-standard best practices on Redshift (2+ years). Ability to troubleshoot and solve performance issues with data ingestion, data processing and query execution on Redshift. Good understanding of orchestration tools like Airflow. Strong Python and SQL coding skills. Strong experience in distributed systems like Spark. Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.). Solid hands-on experience with various data extraction techniques like CDC or time/batch-based, and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction.

Perks, Benefits and Work Culture
Work with cutting-edge technologies on high-impact systems. Be part of a collaborative and technically driven team. Enjoy flexible work options and a culture that values learning. Competitive salary, benefits, and growth opportunities.

Skills: teams, aws, data extraction, etl, data engineering, data warehousing, cdc, aws data technologies, data modeling, airflow, pipelines, data, spark, kafka connect, sql, python, load, redshift, ml
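Since the posting calls out Airflow orchestration and Redshift loading, here is a minimal illustrative DAG of that pattern (the bucket, schema, table, and connection IDs are hypothetical placeholders, not Hopscotch's actual setup):

```python
# Illustrative sketch: a daily Airflow DAG that stages an extract to S3 and COPYs it
# into Redshift. Bucket, schema, table, and connection IDs are assumed placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

def extract_orders(**context):
    # Placeholder for the real extraction step (e.g., a CDC reader or API pull)
    # that writes the day's file to the staging bucket.
    ...

with DAG(
    dag_id="orders_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)

    load = S3ToRedshiftOperator(
        task_id="load_to_redshift",
        s3_bucket="example-staging-bucket",
        s3_key="orders/{{ ds }}/orders.csv",
        schema="analytics",
        table="orders",
        copy_options=["CSV", "IGNOREHEADER 1"],
        redshift_conn_id="redshift_default",
        aws_conn_id="aws_default",
    )

    extract >> load
```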
Posted 2 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Size: Mid-Sized
Experience Required: 1 - 4 years
Working Days: 5 days/week
Office Location: Karnataka, Bengaluru

Role & Responsibilities
Hopscotch is the category creator offering Indian parents fashion for all occasions in a child’s life. If you join the Hopscotch team, you will be partnering with top pedigree managers in a fast-paced and rapidly growing environment. We are seeking a highly motivated and analytical Business Analyst to join our Data Analytics Team. In this role, you will play a critical part in turning raw data into actionable insights that support business decisions and strategic initiatives. You will work closely with cross-functional teams and directly engage with business stakeholders to understand data requirements, design robust data pipelines, and deliver impactful analyses. Collaborate with stakeholders across departments to gather and translate business requirements into data models and analytical solutions. Act as a key point of contact for business teams, ensuring their analytical needs are clearly understood and addressed effectively. Design, develop, and maintain ETL pipelines to ensure seamless data flow across systems. Perform advanced SQL queries to extract, manipulate, and analyze large datasets from multiple sources. Utilize Python to automate data workflows, perform exploratory data analysis (EDA), and build data transformation scripts. Leverage AWS tools (such as S3, Redshift, Glue, Lambda) for data storage, processing, and pipeline orchestration. Develop dashboards and reports to visualize key metrics and insights for business leadership. Conduct deep-dive analyses on business performance, customer behavior, and operational efficiencies to identify growth opportunities. Ensure data accuracy, integrity, and security throughout all analytics processes.

Ideal Candidate
Bachelor’s degree in Computer Science, Data Science, Engineering, Business Analytics, or a related field. 2+ years of experience in data analytics, business intelligence, or a similar role. Proficient in Advanced SQL for complex data manipulation and performance optimization. Intermediate proficiency in Python for data processing and automation (Pandas, NumPy, etc.). Experience with building and maintaining ETL pipelines. Familiarity with AWS Data Services (e.g., S3, Glue, Lambda, Athena). Strong analytical skills with a solid understanding of statistical methods and business performance metrics. Experience with data visualization tools like Tableau, Metabase. Excellent communication and interpersonal skills with the ability to engage directly with business stakeholders and translate their needs into actionable data solutions. Strong problem-solving skills and ability to work in a fast-paced, collaborative environment.

Perks, Benefits and Work Culture
Work with cutting-edge technologies on high-impact systems. Be part of a collaborative and technically driven team. Enjoy flexible work options and a culture that values learning. Competitive salary, benefits, and growth opportunities.

Skills: python, etl pipelines, data analytics, analytics, sql, data, aws glue, data visualization (tableau, metabase), aws lambda, aws s3, business analyst, advanced sql, aws, etl, business intelligence
Posted 2 days ago
5.0 - 8.0 years
9 - 15 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Job Description : As a Data Engineer for our Large Language Model Project, you will play a crucial role in designing, implementing, and maintaining the data infrastructure. Your expertise will be instrumental in ensuring the efficient flow of data, enabling seamless integration with various components, and optimizing data processing pipelines. 5+ years of relevant experience in data engineering roles. Key Responsibilities : Data Pipeline Development - Design, develop, and maintain scalable and efficient data pipelines to support the training and deployment of large language models. Implement ETL processes to extract, transform, and load diverse datasets into suitable formats for model training. Data Integration - Collaborate with cross-functional teams, including data scientists and software engineers, to integrate data sources and ensure the availability of relevant and high-quality data. Implement solutions for real-time data processing and integration, fostering model development agility. Data Quality Assurance - Establish and maintain robust data quality checks and validation processes to ensure the accuracy and consistency of datasets. Troubleshoot data quality issues, identify root causes, and implement corrective measures. Infrastructure Management - Work closely with DevOps and IT teams to manage and optimize the data storage infrastructure, ensuring scalability and performance. Implement best practices for data security, access control, and compliance with data governance policies. Performance Optimization - Identify bottlenecks and inefficiencies in data processing pipelines and implement optimizations to enhance overall system performance. Continuously monitor and evaluate system performance metrics, making proactive adjustments as needed. Skills & Tools Programming Languages - Proficiency in languages such as Python for building robust data processing applications. Big Data Technologies - Experience with distributed computing frameworks like Apache Spark, Databricks & DBT for large-scale data processing. Database Systems - In-depth knowledge of both relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., Vector databases, MongoDB, Cassandra etc). Data Warehousing - Familiarity with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake. ETL Tools - Hands-on experience with ETL tools like Apache NiFi, Talend, or Apache Airflow. Knowledge of NLP will be an added advantage. Cloud Services - Experience with cloud platforms like AWS, Azure, or Google Cloud for deploying and managing data infrastructure. Problem Solving - Analytical mindset with a proactive approach to identifying and solving complex data engineering challenges.
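To ground the data quality responsibilities described above, here is a minimal illustrative PySpark quality gate of the sort that might run before a curated text batch is released for model training (the input path, column names, and thresholds are hypothetical assumptions):

```python
# Illustrative sketch: a simple PySpark data-quality gate for a curated text batch.
# Input path, column names, and thresholds are assumed placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("corpus_quality_gate").getOrCreate()

docs = spark.read.parquet("s3://example-bucket/curated/documents/")

total = docs.count()
empty_text = docs.filter(F.length(F.trim(F.col("text"))) == 0).count()
duplicate_ids = total - docs.select("doc_id").distinct().count()

# Fail the pipeline run if the batch misses basic quality thresholds.
if total == 0 or empty_text / total > 0.01 or duplicate_ids > 0:
    raise ValueError(
        f"Quality gate failed: total={total}, empty={empty_text}, duplicates={duplicate_ids}"
    )
print(f"Quality gate passed for {total} documents")
```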
Posted 2 days ago
2.0 - 6.0 years
4 - 8 Lacs
Bengaluru
Work from Office
As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintain statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Build teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience in Big Data Technology like Hadoop, Apache Spark, Hive. Practical experience in Core Java (1.8 preferred) /Python/Scala. Having experience in AWS cloud services including S3, Redshift, EMR etc. Strong expertise in RDBMS and SQL. Good experience in Linux and shell scripting. Experience in Data Pipeline using Apache Airflow Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 2 days ago
2.0 - 6.0 years
4 - 8 Lacs
Kochi
Work from Office
As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintain statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Build teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience in Big Data Technology like Hadoop, Apache Spark, Hive. Practical experience in Core Java (1.8 preferred) /Python/Scala. Having experience in AWS cloud services including S3, Redshift, EMR etc. Strong expertise in RDBMS and SQL. Good experience in Linux and shell scripting. Experience in Data Pipeline using Apache Airflow Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 2 days ago
2.0 - 5.0 years
4 - 7 Lacs
Pune
Work from Office
As a BigData Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets In this role, your responsibilities may include: As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the clients needs Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Big Data Developer, Hadoop, Hive, Spark, PySpark, Strong SQL. Ability to incorporate a variety of statistical and machine learning techniques. Basic understanding of Cloud (AWS,Azure, etc) . Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL) tools and/or data integration, or federation tools to prepare and transform data as needed. Ability to use leading edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience Basic understanding or experience with predictive/prescriptive modeling skills You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
Posted 2 days ago
2.0 - 5.0 years
4 - 7 Lacs
Navi Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development, design of application, provide regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the build solution and working under an agile framework. Discover and implement the latest technologies trends to maximize and build creative solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience with Apache Spark (PySpark): In-depth knowledge of Sparks architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas, NumPy. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems Preferred technical and professional experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services, network engineering, Good to have detection and prevention tools for Company products and Platform and customer-facing
Posted 2 days ago
2.0 - 5.0 years
4 - 7 Lacs
Pune
Work from Office
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigour and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization.
- Strong experience building data ingestion and transformation pipelines with Talend to process structured and unstructured data from various sources.
- Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities.
- Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements.

Preferred technical and professional experience:
- Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling.
- Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes.
- Proficiency in SQL and/or shell scripting for custom transformations and automation tasks.
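Since the listing centres on Snowflake schema design and loading, here is a hedged sketch of creating a simple dimension table and bulk-loading it from a stage, driven from Python via the Snowflake connector. All identifiers (account, warehouse, database, stage, table) are illustrative assumptions; in practice the ingestion itself would typically be orchestrated in Talend rather than hand-written code.

```python
# Hedged sketch: create a dimension table and bulk-load staged files in Snowflake.
# Every identifier below is an assumption for illustration only.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",          # hypothetical account
    user="etl_user",
    password="********",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="SALES",
)
cur = conn.cursor()

# A small dimension table for a star schema
cur.execute("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_sk   NUMBER AUTOINCREMENT,
        customer_id   STRING,
        customer_name STRING,
        region        STRING,
        valid_from    TIMESTAMP_NTZ,
        valid_to      TIMESTAMP_NTZ
    )
""")

# Bulk load from an (assumed) internal stage populated by the ingestion job
cur.execute("""
    COPY INTO dim_customer (customer_id, customer_name, region, valid_from, valid_to)
    FROM @sales_stage/customers/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")

cur.close()
conn.close()
```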
Posted 2 days ago
2.0 - 5.0 years
4 - 7 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in application development and design, and provide regular support and guidance to project teams on complex coding, issue resolution, and execution.

Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working within an agile framework.
- Discover and implement the latest technology trends to build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data engineering skills: strong understanding of ETL pipelines, data modelling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data processing frameworks: knowledge of libraries such as Pandas and NumPy.
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud platforms: experience working with AWS, Azure, or GCP, including cloud storage systems.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platforms, and customer-facing systems.
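This posting also lists Kafka among the required big data technologies. One common pattern it implies is consuming a Kafka topic with Spark Structured Streaming and landing the records for later batch processing; the sketch below assumes a broker address, topic name, and output paths that are purely illustrative, and it requires the spark-sql-kafka connector on the classpath.

```python
# Illustrative sketch: read a Kafka topic with Structured Streaming and land it as Parquet.
# Broker, topic, and paths are assumptions; needs the spark-sql-kafka package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_stream_example").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
         .option("subscribe", "clickstream")                  # hypothetical topic
         .option("startingOffsets", "latest")
         .load()
)

# Kafka delivers key/value as binary; cast the payload to string before parsing
messages = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    messages.writeStream.format("parquet")
            .option("path", "/data/raw/clickstream/")          # hypothetical path
            .option("checkpointLocation", "/chk/clickstream/")
            .start()
)
query.awaitTermination()
```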
Posted 2 days ago
7.0 - 12.0 years
15 - 30 Lacs
Pune
Work from Office
Greetings of the day! We have a job opening for Power BI + Azure with one of our clients. Please apply only if you can join within 15 to 20 days.

Mandatory skills: Power BI, Azure, ETL. (Profiles without experience in the mandatory skills will not be considered.)

Job Description: The candidate should have strong hands-on Power BI experience with a robust background in Azure data modeling and ETL (Extract, Transform, Load) processes. Hands-on experience with advanced SQL and Python is essential, along with proficiency in building a Data Lake and pipelines using Azure and MS Fabric implementation experience. Additionally, knowledge and experience in the Quality domain and Azure certifications are considered a plus.

Required Skills:
- 7+ years of experience in software engineering, with a focus on data engineering.
- Proven 5+ years of extensive hands-on experience in Power BI report development.
- Proven 3+ years in data analytics, with a strong focus on Azure data services.
- Strong experience in data modeling and ETL processes.
- Advanced hands-on SQL and Python knowledge, and experience working with relational databases for data querying and retrieval.
- Drive best practices in data engineering, data modeling, data integration, and data visualization to ensure the reliability, scalability, and performance of data solutions.
- Ability to work independently end to end and guide other team members.
- Exposure to Microsoft Fabric is good to have.
- Good knowledge of SAP and quality processes.
- Excellent business communication skills.
- Good data analytical skills to analyze data and understand business requirements.
- Excellent knowledge of SQL for performing data analysis and performance tuning.
- Ability to test and document end-to-end processes.
- Proficient in the MS Office suite (Word, Excel, PowerPoint, Access, Visio).
- Proven strong relationship-building and communication skills with team members and business users.
- Excellent communication and presentation skills, with the ability to effectively convey technical concepts to non-technical stakeholders.
- Partner with business stakeholders to understand their data requirements, challenges, and opportunities, and identify areas where data analytics can drive value.

Desired Skills:
- Extensive hands-on experience with Power BI.
- Proven 5+ years of experience in data analytics with a strong focus on Azure data services and Power BI.
- Exposure to Azure Data Factory, Azure Synapse Analytics, and Azure Databricks.
- Solid understanding of data visualization and engineering principles, including data modeling, ETL/ELT processes, and data warehousing concepts.
- Experience with Microsoft Fabric is good to have.
- Strong proficiency in SQL.
- HANA modelling experience is nice to have.
- BusinessObjects and Tableau are nice to have.
- Experience of working in a captive centre is a plus.
- Excellent communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams and communicate technical concepts to non-technical stakeholders.
- Strong problem-solving skills and the ability to thrive in a fast-paced, dynamic environment.

Responsibilities:
- Work with Quality and IT teams to design and implement data solutions, including responsibility for the methods and processes used to translate business needs into functional and technical specifications.
- Design, develop, and maintain robust data models, ETL pipelines, and visualizations.
- Build Power BI reports and dashboards.
- Build a new Data Lake in Azure, expanding and optimizing the data platform and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams.
- Design and develop solutions in Azure big data frameworks/tools: Azure Data Lake, Azure Data Factory, and Fabric.
- Develop and maintain Python scripts for data processing and automation (see the sketch below).
- Troubleshoot and resolve data-related issues and provide support for escalated technical problems.
- Drive process improvement and ensure data quality and integrity across various data sources and systems.
- Maintain the quality and integrity of data in the warehouse, correcting any data problems.
- Participate in code reviews and contribute to best practices for data engineering.
- Ensure data security and compliance with relevant regulations and best practices.
- Develop standards, process flows, and tools that promote and facilitate the mapping of data sources, documenting interfaces and data movement across the enterprise.
- Ensure the design meets the requirements.

Education: IT Graduate (BE, BTech, MCA) preferred
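As referenced in the responsibilities above, much of the day-to-day automation would be small Python data-preparation scripts feeding the Data Lake and Power BI. The sketch below is a hedged example only; the file names, columns, and validation rules are assumptions, not the client's actual data model.

```python
# Minimal sketch of a data-preparation script: read an extract, standardise it,
# and write a clean file for downstream Data Lake / Power BI loading.
# File names and columns are hypothetical.
import pandas as pd

df = pd.read_csv("quality_inspections_raw.csv")        # hypothetical extract

# Standardise column names and types
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df["inspection_date"] = pd.to_datetime(df["inspection_date"], errors="coerce")

# Basic validation: drop rows missing mandatory keys, flag out-of-range values
df = df.dropna(subset=["batch_id", "inspection_date"])
df["defect_rate_valid"] = df["defect_rate"].between(0, 1)

df.to_parquet("quality_inspections_clean.parquet", index=False)
```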
Posted 2 days ago
12.0 - 17.0 years
6 - 10 Lacs
Mumbai, Vikhroli
Work from Office
Oracle Analytics / ADW solution architect with good knowledge of Oracle Data Integrator (ODI) and Oracle Analytics Cloud (OAC). Hands-on experience is a must.

Responsibilities:
- Act as solution architect to design the warehouse on Oracle ADW and implement security.
- Design, implement, and maintain data integration solutions (using Oracle Data Integrator), with the skills to build and optimize data visualizations and reports in Oracle Analytics Cloud. This role involves working with diverse data sources (Oracle Fusion ERP/Procurement Cloud, SAP SuccessFactors, Salesforce, and on-premises databases), transforming them, and delivering insights through dashboards and reports.
- Manage a team of junior developers to deliver warehouse needs.
- Good communication skills and experience working on a financial warehouse; good understanding of finance reporting needs.

Qualifications:
- Any graduate with 12+ years of technology experience.
- 8+ years of experience working on Oracle Analytics and ADW on cloud.

Additional information: certifications are good to have.
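To illustrate one small slice of this role, the hedged sketch below pulls a finance summary from Oracle ADW with the python-oracledb driver, the kind of validation query an architect might run before the figures are surfaced in OAC. The DSN, wallet directory, schema, and table are assumptions, not the client's actual objects, and the ODI/OAC design work itself is done in those tools rather than in code.

```python
# Hedged example: query an (assumed) GL balances table in Oracle ADW for validation.
import oracledb

conn = oracledb.connect(
    user="fin_reporting",
    password="********",
    dsn="adw_high",                  # TNS alias from the ADW wallet (assumed)
    config_dir="/opt/oracle/wallet"  # wallet/tnsnames directory (assumed)
)

with conn.cursor() as cur:
    cur.execute("""
        SELECT ledger, period_name, SUM(amount) AS total_amount
        FROM gl_balances
        GROUP BY ledger, period_name
        ORDER BY ledger, period_name
    """)
    for ledger, period, total in cur:
        print(ledger, period, total)

conn.close()
```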
Posted 2 days ago
1.0 - 6.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Job description: to be added.

Stay up to date on everything Blackbaud: follow us on LinkedIn, X, Instagram, Facebook, and YouTube.

Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status, or any other basis protected by federal, state, or local law.
Posted 2 days ago
3.0 - 8.0 years
11 - 16 Lacs
Bengaluru
Work from Office
As a Data Engineer, you are required to:
- Design, build, and maintain data pipelines that efficiently process and transport data from various sources to storage systems or processing environments, while ensuring data integrity, consistency, and accuracy across the entire data pipeline.
- Integrate data from different systems, often involving data cleaning, transformation (ETL), and validation.
- Design the structure of databases and data storage systems, including the design of schemas, tables, and relationships between datasets, to enable efficient querying.
- Work closely with data scientists, analysts, and other stakeholders to understand their data needs and ensure that the data is structured in a way that makes it accessible and usable.
- Stay up to date with the latest trends and technologies in the data engineering space, such as new data storage solutions, processing frameworks, and cloud technologies; evaluate and implement new tools to improve data engineering processes.

Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Science or Engineering is desirable.

Experience level: at least 3-5 years of hands-on experience in Data Engineering and ETL.

Desired knowledge and experience (a sketch of a typical step on this stack follows below):
- Spark: Spark 3.x, RDD/DataFrames/SQL, Batch/Structured Streaming; knowledge of Spark internals (Catalyst/Tungsten/Photon)
- Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity Catalog, Autoloader
- IDE and tooling: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot
- Testing: pytest, Great Expectations
- CI/CD: YAML Azure Pipelines, continuous delivery, acceptance testing
- Big data design: Lakehouse/Medallion architecture, Parquet/Delta, partitioning, distribution, data skew, compaction
- Languages: Python / functional programming (FP)
- SQL: T-SQL / Spark SQL / HiveQL
- Storage: data lake and big data storage design

Additionally, it is helpful to know the basics of:
- Data pipelines: ADF / Synapse Pipelines / Oozie / Airflow
- Languages: Scala, Java
- NoSQL: Cosmos DB, MongoDB, Cassandra
- Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model
- SQL Server: T-SQL, stored procedures
- Hadoop: HDInsight / MapReduce / HDFS / YARN / Oozie / Hive / HBase / Ambari / Ranger / Atlas / Kafka
- Data catalog: Azure Purview, Apache Atlas, Informatica

Required soft skills and other capabilities:
- Great attention to detail and good analytical abilities.
- Good planning and organizational skills.
- Collaborative approach to sharing ideas and finding solutions.
- Ability to work independently and also in a global team environment.
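As referenced in the stack list above, a recurring pattern on Databricks-style lakehouses is promoting data from a bronze (raw) Delta table to a silver (cleaned) one in the medallion layout. The sketch below is a hedged illustration only: the paths, schema, and quality rules are assumptions, and it presumes a runtime with Delta Lake available (for example, Databricks).

```python
# Sketch of a bronze-to-silver step in a medallion (lakehouse) layout using
# PySpark with Delta tables. Paths, columns, and rules are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_example").getOrCreate()

# Bronze: raw ingested records, stored as-is in Delta
bronze = spark.read.format("delta").load("/lake/bronze/sensor_readings")

# Silver: de-duplicated, typed, and quality-filtered
silver = (
    bronze.dropDuplicates(["device_id", "reading_ts"])
          .withColumn("reading_ts", F.to_timestamp("reading_ts"))
          .withColumn("reading_date", F.to_date("reading_ts"))
          .withColumn("temperature_c", F.col("temperature_c").cast("double"))
          .filter(F.col("temperature_c").between(-50, 150))
)

(silver.write.format("delta")
       .mode("overwrite")
       .partitionBy("reading_date")
       .save("/lake/silver/sensor_readings"))
```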
Posted 2 days ago
The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.
Cities such as Bengaluru, Pune, Mumbai, and Hyderabad, which feature heavily in the listings above, are known for their thriving tech industries and often have a high demand for ETL professionals.
The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.
In the ETL field, a typical career path may include roles such as:
- Junior ETL Developer
- ETL Developer
- Senior ETL Developer
- ETL Tech Lead
- ETL Architect
As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.
Alongside ETL, professionals in this field are often expected to have skills in:
- SQL
- Data Warehousing
- Data Modeling
- ETL Tools (e.g., Informatica, Talend)
- Database Management Systems (e.g., Oracle, SQL Server)
Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.
Here are 25 interview questions that you may encounter in ETL job interviews:
As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!