
34 ETL Workflows Jobs

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

5 - 10 Lacs

Pune, Maharashtra, India

On-site

As a Specialist in Data Visualization, you will focus on designing and developing compelling data visualization solutions that enable actionable insights and facilitate intuitive information consumption for internal business stakeholders. The ideal candidate will demonstrate competency in building user-centric visuals and dashboards that empower stakeholders with data-driven insights and decision-making capability.

Responsibilities:
- Develop user-centric, scalable visualization and analytical solutions, leveraging complex data sources to create intuitive and insightful dashboards. Apply best practices in visualization design to enhance user experience and business impact.
- Drive business engagement, collaborating with stakeholders to define key metrics, KPIs, and reporting needs. Facilitate workshops to develop user stories, wireframes, and interactive visualizations.
- Partner with data engineering, data science, and IT teams to develop scalable, business-friendly reporting solutions.
- Ensure adherence to data governance, privacy, and security best practices.
- Identify opportunities for automation, streamlining manual reporting processes through modern visualization technologies and self-service analytics enablement.
- Provide thought leadership, driving knowledge-sharing within the Data & Analytics organization while staying ahead of industry trends to enhance visualization capabilities.
- Continuously innovate on visualization best practices and technologies by reviewing external resources and the marketplace.
- Ensure timely delivery of high-quality outputs while creating and maintaining SOPs, KPI libraries, and other essential governance documents.

Required experience and skills:
- 5+ years of experience in business intelligence, insight generation, business analytics, data visualization, infographics, and interactive visual storytelling.
- Hands-on expertise in BI and visualization tools such as Power BI, MicroStrategy, and ThoughtSpot.
- Solid understanding of data engineering and modeling, including ETL workflows, Dataiku, Databricks, Informatica, and database technologies like Redshift and Snowflake, with programming skills in SQL and Python.
- Deep knowledge of pharma commercial data sources, including IQVIA, APLD, Claims, Payer, Salesforce, Financials, Veeva, Komodo, IPSOS, and other industry datasets to drive strategic insights.
- Experience in pharmaceutical commercial analytics, including Field Force Effectiveness, customer engagement, market performance assessment, as well as web, campaign, and digital engagement analytics.
- Strong problem-solving, communication, and project management skills, with the ability to translate complex data into actionable insights and navigate complex matrix environments efficiently.
- Strong product management mindset, ensuring analytical solutions are scalable, user-centric, and aligned with business needs, with experience in defining product roadmaps and managing solution lifecycles.
- Expertise in agile ways of working, including Agile/Scrum methodologies, iterative development, and continuous improvement in data visualization and analytics solutions.

Posted 1 day ago

Apply

2.0 - 5.0 years

2 - 5 Lacs

Pune, Maharashtra, India

On-site

As an Associate Specialist in Data Visualization, you will develop compelling data visualization solutions that enable actionable insights and facilitate intuitive information consumption for internal business stakeholders. The ideal candidate will demonstrate competency in building user-centric visuals and dashboards that empower stakeholders with data-driven insights and decision-making capability.

Responsibilities:
- Develop user-centric, scalable visualization and analytical solutions, leveraging complex data sources to create intuitive and insightful dashboards. Apply best practices in visualization design to enhance user experience and business impact.
- Collaborate with Data Viz leads and stakeholders to define key metrics, KPIs, and reporting needs.
- Participate in and contribute to workshops to develop user stories, wireframes, and interactive visualizations.
- Ensure timely delivery of high-quality outputs while creating and maintaining SOPs, KPI libraries, and other essential governance documents.
- Partner with data engineering, data science, and IT teams to develop scalable, business-friendly reporting solutions.
- Ensure adherence to data governance, privacy, and security best practices.
- Identify opportunities for automation, streamlining manual reporting processes through modern visualization technologies and self-service analytics enablement.
- Continuously innovate on visualization best practices and technologies by reviewing external resources and the marketplace.

Required experience and skills:
- 2+ years of experience in business intelligence, insight generation, business analytics, data visualization, infographics, and interactive visual storytelling.
- Hands-on expertise in BI and visualization tools such as Power BI, PowerApps, Qlik Sense, MicroStrategy, and ThoughtSpot.
- Experience in data engineering and modeling, including ETL workflows, Dataiku, Databricks, Informatica, and database technologies like Redshift and Snowflake, with programming skills in SQL and Python.
- Knowledge of pharma commercial data sources, including IQVIA, APLD, Claims, Payer, Salesforce, Financials, Veeva, Komodo, IPSOS, and other industry datasets to drive strategic insights is a plus.
- Experience in pharmaceutical commercial analytics, including Field Force Effectiveness, customer engagement, market performance assessment, as well as web, campaign, and digital engagement analytics, is a plus.
- Experience in agile ways of working, including Agile/Scrum methodologies, iterative development, and continuous improvement in data visualization and analytics solutions.
- Good problem-solving and communication skills, with the ability to interpret data and provide actionable insights while effectively working in cross-functional environments.
- Basic understanding of product management principles, focusing on developing user-centric, scalable analytical solutions aligned with business needs.
- Familiarity with agile ways of working, including exposure to Agile/Scrum methodologies and participation in iterative development and continuous improvement initiatives in data visualization and analytics projects.
- Strong learning agility, with the ability to quickly adapt to new tools, technologies, and business environments while continuously enhancing analytical and technical skills.

Required skills: Business Intelligence (BI), Data Visualization, Requirements Management, User Experience (UX) Design

Posted 1 day ago

Apply

2.0 - 5.0 years

2 - 5 Lacs

Hyderabad, Telangana, India

On-site

As an Associate Specialist in Data Visualization, you will develop compelling data visualization solutions that enable actionable insights and facilitate intuitive information consumption for internal business stakeholders. The ideal candidate will demonstrate competency in building user-centric visuals and dashboards that empower stakeholders with data-driven insights and decision-making capability.

Responsibilities:
- Develop user-centric, scalable visualization and analytical solutions, leveraging complex data sources to create intuitive and insightful dashboards. Apply best practices in visualization design to enhance user experience and business impact.
- Collaborate with Data Viz leads and stakeholders to define key metrics, KPIs, and reporting needs.
- Participate in and contribute to workshops to develop user stories, wireframes, and interactive visualizations.
- Ensure timely delivery of high-quality outputs while creating and maintaining SOPs, KPI libraries, and other essential governance documents.
- Partner with data engineering, data science, and IT teams to develop scalable, business-friendly reporting solutions.
- Ensure adherence to data governance, privacy, and security best practices.
- Identify opportunities for automation, streamlining manual reporting processes through modern visualization technologies and self-service analytics enablement.
- Continuously innovate on visualization best practices and technologies by reviewing external resources and the marketplace.

Required experience and skills:
- 2+ years of experience in business intelligence, insight generation, business analytics, data visualization, infographics, and interactive visual storytelling.
- Hands-on expertise in BI and visualization tools such as Power BI, PowerApps, Qlik Sense, MicroStrategy, and ThoughtSpot.
- Experience in data engineering and modeling, including ETL workflows, Dataiku, Databricks, Informatica, and database technologies like Redshift and Snowflake, with programming skills in SQL and Python.
- Knowledge of pharma commercial data sources, including IQVIA, APLD, Claims, Payer, Salesforce, Financials, Veeva, Komodo, IPSOS, and other industry datasets to drive strategic insights is a plus.
- Experience in pharmaceutical commercial analytics, including Field Force Effectiveness, customer engagement, market performance assessment, as well as web, campaign, and digital engagement analytics, is a plus.
- Experience in agile ways of working, including Agile/Scrum methodologies, iterative development, and continuous improvement in data visualization and analytics solutions.
- Good problem-solving and communication skills, with the ability to interpret data and provide actionable insights while effectively working in cross-functional environments.
- Basic understanding of product management principles, focusing on developing user-centric, scalable analytical solutions aligned with business needs.
- Familiarity with agile ways of working, including exposure to Agile/Scrum methodologies and participation in iterative development and continuous improvement initiatives in data visualization and analytics projects.
- Strong learning agility, with the ability to quickly adapt to new tools, technologies, and business environments while continuously enhancing analytical and technical skills.

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Delhi

On-site

The ideal candidate should possess extensive expertise in SQL, data modeling, ETL/ELT pipeline development, and cloud-based data platforms like Databricks or Snowflake. You will be responsible for designing scalable data models, managing reliable data workflows, and ensuring the integrity and performance of critical financial datasets. Collaboration with engineering, analytics, product, and compliance teams is a key aspect of this role. Responsibilities: - Design, implement, and maintain logical and physical data models for transactional, analytical, and reporting systems. - Develop and oversee scalable ETL/ELT pipelines to process large volumes of financial transaction data. - Optimize SQL queries, stored procedures, and data transformations for enhanced performance. - Create and manage data orchestration workflows using tools like Airflow, Dagster, or Luigi. - Architect data lakes and warehouses utilizing platforms such as Databricks, Snowflake, BigQuery, or Redshift. - Ensure adherence to data governance, security, and compliance standards (e.g., PCI-DSS, GDPR). - Work closely with data engineers, analysts, and business stakeholders to comprehend data requirements and deliver solutions. - Conduct data profiling, validation, and quality assurance to maintain clean and consistent data. - Maintain comprehensive documentation for data models, pipelines, and architecture. Required Skills & Qualifications: - Proficiency in advanced SQL, including query tuning, indexing, and performance optimization. - Experience in developing ETL/ELT workflows with tools like Spark, dbt, Talend, or Informatica. - Familiarity with data orchestration frameworks such as Airflow, Dagster, Luigi, etc. - Hands-on experience with cloud-based data platforms like Databricks, Snowflake, or similar technologies. - Deep understanding of data warehousing principles like star/snowflake schema, slowly changing dimensions, etc. - Knowledge of cloud services (AWS, GCP, or Azure) and data security best practices. - Strong analytical and problem-solving skills in high-scale environments. Preferred Qualifications: - Exposure to real-time data pipelines like Kafka, Spark Streaming. - Knowledge of data mesh or data fabric architecture paradigms. - Certifications in Snowflake, Databricks, or relevant cloud platforms. - Familiarity with Python or Scala for data engineering tasks.,

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Senior Database Developer at Kobie's India Tech Hub, you will have the opportunity to be one of the first hires in an exciting new venture. Kobie, a leading player in the loyalty industry, is expanding its global footprint by establishing a Tech Hub in India. This initiative aims to create deep connections with customers through personalized, data-driven loyalty experiences, further enhancing enterprise value through loyalty. Your role will involve designing scalable, data-driven solutions for high-impact loyalty platforms, leveraging your expertise in PL/pgSQL, efficient SQL queries, performance tuning, and ETL workflows. You will work with Oracle and/or PostgreSQL databases, handling complex data structures to support data integration, transformation, and analytics. As a key member of the team, you will be responsible for developing and maintaining database solutions that facilitate client onboarding, reward processing, data quality assurance, and operational performance. Collaboration with cross-functional teams such as developers, QA specialists, analysts, and DBAs will be essential to optimize data pipelines and queries, ensuring they meet the evolving needs of clients and marketing platforms. Your impact will be significant as you contribute to import and extract processes, data migration efforts, troubleshooting data quality and performance issues, tuning queries for optimal performance, supporting data integration from various sources, and providing technical assistance to stakeholders. Your ability to work both independently and collaboratively, manage priorities effectively, and communicate with technical and non-technical team members will be crucial for success. To excel in this role, you should have 5-7+ years of experience in SQL query design and maintenance, proficiency in Oracle and/or PostgreSQL, expertise in performance tuning, ETL development, data security, and a track record of working in team environments. Bonus skills include experience with data mapping tools, modern cloud platforms like Snowflake, job scheduling automation, version control systems, and supporting Java-based development teams. At Kobie, known for its award-winning culture and innovative loyalty solutions, you will be part of a collaborative and growth-focused environment. As a trusted partner to global brands, Kobie focuses on building lasting emotional connections with consumers through strategy-led technology solutions. The launch of the India Tech Hub presents an exciting opportunity to be part of a culture that values diversity, equity, inclusion, and giving back to the community. Joining Kobie means access to competitive benefits, comprehensive health coverage, well-being perks, flexible time off, and opportunities for career growth. The integration of new teammates in India with U.S. teams, exposure to global projects, and the future establishment of a physical office in Bengaluru emphasize Kobie's commitment to collaboration and connection. This is your chance to be part of something significant and shape the future of the Kobie India Tech Hub. Apply now and contribute to delivering innovative customer experiences for renowned brands while working alongside industry leaders in the loyalty space.,

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Haryana

On-site

As a Senior JEDOX Developer at Siemens Energy, your primary responsibility will involve working closely with global business users to address tickets submitted via SharePoint or Mailbox. You will collaborate with IT development and middleware teams to identify and implement solutions aligned with agreed operation and service level agreements. Additionally, you will play a key role in the monthly closing process, ensuring data accuracy and coordinating with end users. Attending sprint development meetings and engaging with collaborators and senior management will be essential to your role, helping you expand your network and prepare for future global responsibilities within Siemens Energy. Your impact will be significant as you lead the design, development, and implementation of data pipelines and ETL workflows. You will be tasked with managing and optimizing workflows for efficient data processing, designing data solutions in databases, and proactively developing reports with minimal documented requirements. Collaborating with cross-functional teams to translate requirements into scalable data architecture and fostering continuous improvement and innovation will be key aspects of your role. To excel in this position, you should have at least 6 years of experience in IT, preferably with a background in Engineering or a related field. Your expertise should include 4+ years of experience in ETL workflows, data analytics, reporting tools like Power BI and Tableau, and working with cloud databases such as SNOWFLAKE. Familiarity with EPM tools like JEDOX, ANAPLAN, or TM1, multidimensional database concepts, Power Automate workflows, and Excel formulas will be advantageous. Your ability to adapt to new technologies and thrive in a fast-paced environment, collaborate effectively with business users, and stay informed about industry trends are essential qualities for this role. Joining the Value Center Manufacturing team at Siemens Energy means being part of a dynamic group focused on driving digital transformation in manufacturing. You will contribute to innovative projects that impact the business and industry, playing a vital role in achieving Siemens Energy's objectives. The Digital Core team supports Business Areas by delivering top-notch IT, Strategy & Technology solutions. Siemens Energy is a global energy technology company with a diverse workforce committed to sustainable and reliable energy solutions. Our emphasis on diversity fuels our creativity and innovation, allowing us to harness the power of inclusion across over 130 nationalities. At Siemens Energy, we prioritize decarbonization, new technologies, and energy transformation to drive positive change in the energy sector. As a Siemens Energy employee, you will enjoy benefits such as Medical Insurance coverage for yourself and eligible family members, including a Family floater cover. Additionally, you will have the option to opt for a Meal Card as part of your CTC, providing tax-saving benefits as per company policy. Siemens Energy is dedicated to creating a supportive and inclusive work environment where individuals from all backgrounds can thrive and contribute to our shared success. Join us in shaping the future of energy and making a meaningful impact on society.,

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer, you will be responsible for designing, developing, and delivering ADF pipelines for the Accounting & Reporting Stream. Your role will involve creating and maintaining scalable data pipelines using PySpark and ETL workflows in Azure Databricks and Azure Data Factory. You will also work on data modeling and architecture to optimize data structures for analytics and business requirements. Your responsibilities will include monitoring, tuning, and troubleshooting pipeline performance for efficiency and reliability. Collaboration with business analysts and stakeholders is key to understanding data needs and delivering actionable insights. Implementing data governance practices to ensure data quality, security, and compliance with regulations is essential. You will also be required to develop and maintain documentation for data pipelines and architecture. Experience in testing and test automation is necessary for this role. Collaboration with cross-functional teams to comprehend data requirements and provide technical advice is crucial. Strong background in data engineering is required, with proficiency in SQL, Azure Databricks, Blob Storage, Azure Data Factory, and programming languages like Python or Scala. Knowledge of Logic App and Key Vault is also necessary. Strong problem-solving skills and the ability to communicate complex technical concepts to non-technical stakeholders are essential for effective communication within the team.,

Posted 3 days ago

Apply

4.0 - 10.0 years

0 Lacs

Delhi

On-site

As an Ab Initio Developer with over 10 years of experience, your primary responsibility will be to develop and optimize ETL workflows and processes for data extraction, transformation, and loading. You should have a minimum of 4 years of hands-on experience in Ab Initio development and proficiency in the different Ab Initio suite components. Your role will involve designing and implementing ETL processes for large-scale data sets, utilizing your strong understanding of ETL concepts, data warehousing principles, and relational databases. To excel in this role, you must possess solid knowledge of SQL and scripting languages for data manipulation and analysis. Your problem-solving skills will be crucial as you work independently or collaboratively within a team. Effective communication is key, as you will be interacting with stakeholders at various levels to ensure the successful execution of projects. Additionally, you will be responsible for performing unit testing, debugging, and troubleshooting of Ab Initio graphs and applications, which is essential to guarantee the accuracy and integrity of the data throughout the development process. If you are someone who thrives in a dynamic environment, has a passion for data transformation, and enjoys tackling complex challenges, this role offers you the opportunity to leverage your expertise and contribute significantly to the success of projects.

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Kochi, Kerala

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of yourself. EY is counting on your unique voice and perspective to help them become even better. Join EY to build an exceptional experience for yourself and contribute to creating a better working world for all. The Capacity Analyst role involves developing and implementing capacity plans for complex IT services, applications, and infrastructure components to ensure efficient use of IT Infrastructure to cost-effectively meet business needs. The position collaborates across the EY Technology organization to incorporate capacity management principles into the design and management of network/services/applications. Key Responsibilities: - Develop complex service capacity forecasts and scalability plans using production system metrics, performance test results, and business requirements. - Ensure efficient utilization of IT infrastructure to meet capacity and performance requirements for services and applications. - Work with service delivery organizations in EY Technology to develop business forecasts and incorporate them into service and component capacity plans. - Define component resource requirements for complex services/applications and develop scalability plans. - Monitor service capacity performance for complex applications/products/services and components to proactively avoid incidents/problems. - Perform problem determination of capacity/performance issues and recommend solutions. - Oversee the Capacity Management process for complex services and components. - Monitor capacity process compliance, recommend and implement process improvements. Skills and Attributes for Success: - Comprehensive understanding of Generative AI technologies and frameworks. - Extensive experience in data migration processes and Azure Data Factory. - Knowledge of virtualization technologies, MS SQL, Network, Server Hardware, Storage, and Cloud technologies. - Experience in Capacity Modelling and data analysis/reporting tools. - Understanding of Capacity Management methodologies based on ITIL principles. - Knowledge of global server architectures and planning for systems over multiple global locations. Qualifications: - Bachelor's degree in a related discipline or equivalent work experience. - 2-5 years of experience in Data Science with a focus on Data Engineering and Generative AI. - Strong problem-solving, communication, interpersonal, and analytic skills. - Knowledge of the IT Process Landscape. - Excellent English language skills. - Ability to adapt to changing priorities and work with diverse teams. - Certification in ITIL V3 Foundations or equivalent experience. Desired Skills: - Excellent communication and negotiation skills. - Flexibility to adjust to changing demands and work with diverse teams. - Strong teamwork, collaboration, documentation, and analytics skills. What We Offer: EY provides a highly integrated, global team environment with opportunities for growth and career development. The benefits package focuses on physical, emotional, financial, and social well-being. Continuous learning, transformative leadership, and a diverse and inclusive culture are key aspects of working at EY. EY is committed to building a better working world by creating long-term value for clients, people, and society. 
With diverse teams in over 150 countries, EY provides trust through assurance and helps clients grow, transform, and operate across various services. Please note that the role may require working outside normal hours and travel, both domestically and internationally, given its global focus and responsibilities.,

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

As an LLM Engineer at HuggingFace, you will play a crucial role in bridging the gap between advanced language models and real-world applications. Your primary focus will be on fine-tuning, evaluating, and deploying LLMs using frameworks such as HuggingFace and Ollama. You will be responsible for developing React-based applications with seamless LLM integrations through REST, WebSockets, and APIs. Additionally, you will work on building scalable pipelines for data extraction, cleaning, and transformation, as well as creating and managing ETL workflows for training data and RAG pipelines. Your role will also involve driving full-stack LLM feature development from prototype to production.

To excel in this position, you should have at least 2 years of professional experience in ML engineering, AI tooling, or full-stack development. Strong hands-on experience with HuggingFace Transformers and LLM fine-tuning is essential. Proficiency in React, TypeScript/JavaScript, and back-end integration is required, along with comfort working with data engineering tools such as Python, SQL, and Pandas. Familiarity with vector databases, embeddings, and LLM orchestration frameworks is a plus. Candidates with experience in Ollama, LangChain, or LlamaIndex will be given bonus points. Exposure to real-time LLM applications like chatbots, copilots, or internal assistants, as well as prior work with enterprise or SaaS AI integrations, is highly valued.

This role offers a remote-friendly environment with flexible working hours and a high-ownership opportunity. Join our small, fast-moving team at HuggingFace and be part of building the next generation of intelligent systems. If you are passionate about working on impactful AI products and have the drive to grow in this field, we would love to hear from you.

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

You will be responsible for contributing to the development and continuous enhancement of a proprietary low-code or no-code platform for application development, ETL workflows, and data analytics. This will involve designing and implementing new features and functionalities using a microservices-based architecture with the MEAN stack, Python, and Elasticsearch. You will also be tasked with maintaining and optimizing existing components to ensure performance, scalability, and reliability. Collaborating with cross-functional teams to deliver robust and user-friendly solutions will be a key part of your role. Additionally, you will support and troubleshoot issues across the platform to ensure smooth operation and empower end-users by enabling application creation with minimal coding effort. Ensuring code quality through best practices, code reviews, and testing will also be part of your responsibilities. Furthermore, you will be expected to research and integrate new technologies to improve platform capabilities and performance.

ViewZen Labs Private Limited is a DIPP-recognized start-up providing data collection and data analytics solutions. The company's platforms are designed to help stakeholders collect data quickly, visualize it, and benefit from it at very low costs. By letting clients focus on their core business while managing their data and providing actionable insights, the company aims to offer valuable solutions built on core technology platforms that combine deep industry research and domain expertise.

Posted 3 days ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are looking for a Lead Database Developer-Oracle to join our technology team at Clarivate. As the Lead Database Developer, you will oversee the design, development, and maintenance of high-performance databases utilizing Oracle and PostgreSQL. You should have a Bachelor's degree in computer science, Information Technology, or a related field, or equivalent experience. A minimum of 8 years of experience in Oracle database environments and PostgreSQL is required, along with expertise in database performance tuning, query optimization, and capacity planning. Additionally, you should have at least 4 years of experience in cloud-based database services like AWS RDS. A solid understanding of data security, backup, and recovery procedures is necessary, along with knowledge of relational database concepts such as primary/foreign keys, many-to-many relationships, and complex join operations. Experience in system analysis, design, problem-solving, support, and troubleshooting is also expected. Familiarity with cloud database platforms such as AWS RDS and Azure Cloud would be advantageous. It would be beneficial if you have an in-depth understanding of database architecture and data modeling principles, as well as good knowledge of No-SQL database solutions, AWS, and Azure Db solutions and services. In this role, you will collaborate with the development team to design and implement efficient database structures that meet the organization's requirements. You will develop and maintain database schemas, tables, views, stored procedures, and functions. Monitoring and analyzing database performance to identify and resolve bottlenecks or performance issues will be a key responsibility. You will optimize queries, indexes, data schemas, and database configurations to enhance system performance and ensure data security and integrity by implementing and maintaining database security measures. Additionally, you will develop and maintain data integration processes, including Extract, Transform, and Load (ETL) workflows, and create comprehensive documentation for database environments. Working with DevOps, you will develop and maintain database backup and recovery strategies to ensure data integrity and availability. You will be part of a Database performance team that operates horizontally across the Intellectual Property pillar at Clarivate. The team works with various database genres, both in the cloud and on-premise, and encourages the support and deep knowledge of other specialist databases. Collaboration with cross-functional teams, including developers, system administrators, and business stakeholders to understand their database requirements and provide technical support is essential. This is a full-time opportunity with Clarivate, requiring 9 hours of work per day, including a lunch break. Clarivate is committed to providing equal employment opportunities for all qualified individuals with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment, ensuring compliance with applicable laws and regulations governing non-discrimination in all locations.,

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As an experienced SQL Developer with a minimum of 3 years of relevant experience, you will be responsible for working with data storage and management. The role requires proficiency with the MS SQL Server language, platform, and environment, including SSMS. You should have a Bachelor's or Master's degree in Computer Science, Information Systems, or a related discipline. Your expertise should include working with SQL tables, data types, stored procedures, views, functions, and T-SQL. You should be skilled in query performance and optimization, be able to understand PL/SQL, and develop and troubleshoot business logic migrated into T-SQL. An understanding of relational data models and tools will be advantageous. Experience in developing ETL workflows, process improvement, and process automation related to database development will be a valuable addition to your skill set. You should also be able to collaborate effectively with clients and with the Database, Analyst, and Operations teams. If you are looking for a challenging role in SQL development, this opportunity offers a competitive salary package that is best in the industry.

Posted 4 days ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We are committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a thriving internal community and are committed to creating a workplace that looks like the world that we serve.

Benefits:
- Comprehensive health and life insurance and well-being benefits, based on location.
- Pension/retirement benefits.
- Paid time off and personal/family care, and other leaves of absence when needed to support your physical, financial, and emotional well-being.
- A flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays, and a third day unique to each team or employee).

The Impact You Will Have in This Role: Enterprise Services comprises multiple business platforms, including Client Services, Global Business Operations, Business Architecture, Data Strategy and Analytics, and Digital Services, which report into the Chief of Enterprise Services. These grouped platforms enable the business to optimize delivery for clients, generate efficiencies and resilience, and enable consistency in the business digitization strategy, processes, and end-to-end best practices. The skilled Automation Tester is experienced in testing applications developed in Appian, able to validate ETL workflows by querying and comparing result sets, and has hands-on knowledge of testing applications developed using RPA tools like BluePrism. The Automation Tester is a self-starter with a strong ability to prioritize, own testing deliverables/timelines, understand various solution components, and clearly and effectively communicate results with the team.

What You'll Do:
- Develop and execute test cases for applications developed in Appian, ensuring comprehensive coverage of both positive and negative scenarios.
- Test workflows designed on Talend, focusing on data extraction, transformation, and loading processes.
- Validate and verify automation (RPA) solutions developed using BluePrism, ensuring they meet business requirements and function as expected.
- Gather and set up required test data for testing, ensuring data integrity and consistency.
- Track test results and defects throughout the testing lifecycle, using tools like JIRA for defect management.
- Coordinate with the user base for a successful roll-out during the user acceptance test phase, providing clear and concise feedback.
- Independently manage multiple projects based on provided priorities to complete testing and provide feedback within given timelines.
- Collaborate with other team members and analysts through the delivery cycle, ensuring seamless integration and communication.
- Participate in an Agile delivery team that builds high-quality and scalable work products, contributing to sprint planning, reviews, and retrospectives.
- Assist in the evaluation of upcoming technologies and contribute to the overall solution design, providing insights and recommendations.
- Support production releases and maintenance windows, working closely with the Operations team to ensure smooth deployments.
- Align risk and control processes into day-to-day responsibilities to monitor and mitigate risk; escalate appropriately.

Qualifications: Bachelor's degree preferred or equivalent experience.

Talents Needed for Success:
- Minimum of 6 years of related experience in testing automation solutions.
- Ability to create scripts using Python.
- Hands-on experience with test automation tools like Selenium, TestComplete, and UFT One.
- Experience in using tools like BluePrism, UiPath, and Power Automate.
- Strong understanding of SDLC and legacy technologies like MS Access and mainframe systems.
- Ability to write and execute SQL queries to validate test results in SQL Server databases.
- Experience in testing solutions built on Appian, with a focus on process automation and workflow management.

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Thane, Maharashtra

On-site

Job Description As a Python Backend Engineer with exposure to AI engineering at Quantanite, you will be an integral part of our team responsible for building a scalable, cognitive data platform. Your role will involve designing and developing high-performance backend services using Python (FastAPI), developing RESTful APIs for data ingestion, transformation, and AI-based feature access, and collaborating closely with DevOps and data engineering teams to integrate backend services with Azure data pipelines and databases. Your primary responsibilities will include managing database schemas, writing complex SQL queries, and supporting ETL processes using Python-based tools. Additionally, you will be tasked with building secure, scalable, and production-ready services that adhere to best practices in logging, authentication, and observability. You will also implement background tasks and async event-driven workflows for data crawling and processing. In terms of AI engineering contributions, you will support the integration of AI models (NLP, summarization, information retrieval) within backend APIs. You will collaborate with the AI team to deploy lightweight inference pipelines using PyTorch, TensorFlow, or ONNX, and participate in training data pipeline design and minor model fine-tuning as needed for business logic. Furthermore, you will contribute to the testing, logging, and monitoring of AI agent behavior in production environments. To be successful in this role, you should have at least 3 years of experience in Python backend development, with a strong proficiency in FastAPI or equivalent frameworks. A solid understanding of RESTful API design, asynchronous programming, and web application architecture is essential. Additionally, you should demonstrate proficiency in working with relational databases (e.g., PostgreSQL, MS SQL Server) and Azure cloud services, as well as experience with ETL workflows, job scheduling, and data pipeline orchestration (Airflow, Prefect, etc.). Exposure to machine learning libraries (e.g., Scikit-learn, Transformers, OpenAI APIs) is a plus, along with familiarity with containerization (Docker), CI/CD practices, and performance tuning. A mindset of code quality, scalability, documentation, and collaboration is highly valued at Quantanite. If you are looking for a challenging yet rewarding opportunity to work in a collaborative environment with a focus on innovation and growth, we encourage you to apply to join our dynamic team at Quantanite.,

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

This is a full-time Data Engineer position with D Square Consulting Services Pvt Ltd, based in Pan-India with a hybrid work model. You should have at least 5 years of experience and be able to join immediately. As a Data Engineer, you will be responsible for designing, building, and scaling data pipelines and backend services supporting analytics and business intelligence platforms. A strong technical foundation, Python expertise, API development experience, and familiarity with containerized CI/CD-driven workflows are essential for this role. Your key responsibilities will include designing, implementing, and optimizing data pipelines and ETL workflows using Python tools, building RESTful and/or GraphQL APIs, collaborating with cross-functional teams, containerizing data services with Docker, managing deployments with Kubernetes, developing CI/CD pipelines using GitHub Actions, ensuring code quality, and optimizing data access and transformation. The required skills and qualifications for this role include a Bachelor's or Master's degree in Computer Science or a related field, 5+ years of hands-on experience in data engineering or backend development, expert-level Python skills, experience with building APIs using frameworks like FastAPI, Graphene, or Strawberry, proficiency in Docker, Kubernetes, SQL, and data modeling, good communication skills, familiarity with data orchestration tools, experience with streaming data platforms like Kafka or Spark, knowledge of data governance, security, and observability best practices, and exposure to cloud platforms like AWS, GCP, or Azure. If you are proactive, self-driven, and possess the required technical skills, then this Data Engineer position is an exciting opportunity for you to contribute to the development of cutting-edge data solutions at D Square Consulting Services Pvt Ltd.,

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a highly skilled and experienced .NET Full Stack Developer with a minimum of 6 years of hands-on expertise in both backend and frontend development. You have robust experience with .NET/.NET Core, Microservices, ReactJS/NodeJS, and strong DevOps capabilities. This is a hybrid role across multiple Indian cities with a contract duration of 6 to 12 months. As a .NET Full Stack Developer, your key responsibilities include leading the design, development, testing, and deployment of scalable web applications using Microsoft technologies. You will collaborate in an Agile team environment to deliver end-to-end full stack solutions and build and maintain Microservices-based architectures. Developing clean, scalable code using .NET 8 or .NET Core and C#, implementing frontend solutions using ReactJS, Redux, Material UI, and Bootstrap (or NodeJS if applicable), and designing and managing robust CI/CD pipelines using Jenkins, GitHub/Bitbucket, and containerization technologies are essential parts of your role. You will utilize Azure or AWS services like Web Apps, AKS, Docker, Redis, Service Bus, App Insights, and more. Ensuring best practices in software engineering, participating in database design and performance tuning, and handling multi-shore delivery environments are also key responsibilities. To qualify for this position, you must hold a B.Tech / M.Tech in Computer Science or a related field and have at least 6 years of full stack expertise in both backend and frontend technologies. Strong working knowledge of Azure or AWS, Docker, Containers, AKS, Jenkins, Git, CI/CD pipelines, experience in relational databases and data design, and a deep understanding of Agile Scrum, Kanban, and Waterfall methodologies are required. Other skills such as LINQ, Entity Framework, RESTful APIs, Redux, CSS/SCSS, ETL workflows, React Native (preferred), and knowledge in Supply Chain systems (preferred) are desirable. Excellent problem-solving, communication, and documentation skills are essential. The ideal candidate for this role is technically sound, highly process-driven, possesses strong troubleshooting and analytical skills, is capable of independently owning technical solutions and leading implementations, and is ethical, principled, and effective in multi-cultural teams. A strong understanding of SDLC processes and Agile delivery is expected. This high-impact engineering role involves full lifecycle software development, close collaboration with distributed teams, and frequent engagement with DevOps practices and cloud services. You are expected to transform abstract business concepts into scalable technical solutions.,

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

The Smart Cube, a WNS company, is seeking Assistant Managers who will collaborate with the Project Lead to design effective analytical frameworks aligned with client objectives. The Assistant Managers will translate requirements into clear deliverables, manage data preparation, perform quality checks, and ensure analysis readiness. They should possess expertise in implementing analytical techniques and machine learning methods such as regression, decision trees, segmentation, forecasting, and algorithms like Random Forest, SVM, and ANN. Additionally, they are responsible for sanity checks, quality control, and interpreting results in a business context to identify actionable insights. Assistant Managers will independently handle client communications, interact with onsite leads, and manage the entire project lifecycle from initiation to delivery. This includes translating business requirements into technical specifications, overseeing data teams, ensuring data integrity, and facilitating communication between business and technical stakeholders. They will lead process improvements in analytics and act as project leads for cross-functional coordination. In terms of client management, Assistant Managers will serve as client leads, maintain strong relationships, participate in deliverable discussions, and guide project teams on execution strategies. Proficiency in connecting databases with Knime, understanding SQL concepts, and designing Knime ETL workflows to support BI tools is required. They must also be proficient in PowerBI for building dashboards and supporting data-driven decision-making. Knowledge of leading analytics projects using PowerBI, Python, and SQL to generate insights is essential. Ideal candidates should have 4-7 years of experience in advanced analytics across Marketing, CRM, or Pricing in Retail or CPG. Experience in other B2C domains is also acceptable. Proficiency in handling large datasets using Python, R, or SAS, and experience with multiple analytics or machine learning techniques is required. Candidates should have a good understanding of consumer sectors such as Retail, CPG, or Telecom, and experience with various data formats and platforms including flat files, RDBMS, Knime workflows and server, SQL Server, Teradata, Hadoop, and Spark. Strong written and verbal communication skills are essential for creating client-ready deliverables using Excel and PowerPoint. Basic knowledge of statistical and machine learning techniques like regression, clustering, decision trees, forecasting, and other ML models is also necessary. Knowledge of optimization methods, supply chain concepts, VBA, Excel Macros, Tableau, and Qlikview will be an added advantage. Qualifications: - Engineers from top tier institutes (IITs, DCE/NSIT, NITs) or Post Graduates in Maths/Statistics/OR from top Tier Colleges/Universities - MBA from top tier B-schools,

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer at our company, you will be an integral part of the team responsible for providing critical monitoring and support for our production data environment. You will work closely with internal and external teams to ensure operational stability, triage incidents, and maintain consistent coverage in a 7-day-a-week support role. Your responsibilities will include proactively monitoring and ensuring the successful execution of scheduled ETL jobs, troubleshooting and resolving issues in data pipelines and SQL environment, and coordinating with IT teams and vendors to address infrastructure-related issues. You will follow and maintain Run Books and SOPs, support data engineering tasks, and document detailed incident reports to ensure system uptime and minimize downtime. To excel in this role, you should have 3-5+ years of experience in MS SQL Server administration and development, strong skills in SSIS and T-SQL, and proven experience in supporting ETL workflows and handling production incidents. Familiarity with SQL Agent job monitoring, excellent communication and collaboration skills, and the ability to follow operational processes and escalation protocols are essential. This is a full-time position with benefits including a flexible schedule, health insurance, life insurance, paid time off, and the opportunity to work from home. The shift timing is from 9:30 AM to 5:30 PM IST, 7 days a week, with coverage provided by two developers working in rotation. If you are passionate about data engineering and have the required skills and experience, we encourage you to apply for this position. Required Skills: - 3-5+ years of experience in MS SQL Server administration and development - Strong proficiency with SSIS and T-SQL - Proven experience supporting ETL workflows and handling production incidents - Familiarity with SQL Agent job monitoring and logging - Excellent communication and collaboration skills - Ability to follow structured operational processes and escalation protocols Education: Bachelor's degree required Experience: 5 years of relevant work preferred Join us and be part of a dynamic team dedicated to maintaining the operational stability of our production data environment.,

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an Intern at our company, you will be responsible for developing, testing, and maintaining data pipelines using PySpark and Databricks. Your role will involve assisting in the construction of ETL workflows and integrating data from various sources through Azure Synapse Analytics. It will be imperative for you to ensure data quality, integrity, and consistency across all systems. Additionally, you will have the opportunity to contribute to documentation and participate in the performance tuning of data solutions.

Our company, FEG, stands as one of the largest omnichannel betting and gaming operators in Central and Eastern Europe. As a digital-first business, we treat technology as a fundamental component of how we engage with our customers, execute marketing campaigns, and oversee internal operations; it is intricately woven into the fabric of our operations. FEG India serves as our emerging technology hub, delivering top-notch solutions that bolster FEG's global operations. With a specialized team skilled in data management, business intelligence, analytics, AI/ML, software development, testing, and IT services, FEG India is pivotal in propelling innovation and excellence throughout the organization.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

As a member of our Non-Financial Risk team at Macquarie, you will be involved in embedding the Operational Risk Management Framework across the organization, encompassing financial regulatory reporting and financial statutory reporting risk. We are seeking individuals who have a keen interest in analytics and reporting within the realm of risk management.

At Macquarie, we take pride in our ability to bring together a diverse group of individuals and empower them to explore a multitude of possibilities. Operating in 31 markets globally with 56 years of continuous profitability, we offer a supportive and collaborative environment where every team member, regardless of their role, is encouraged to contribute ideas and drive outcomes.

Your primary responsibilities in this role will involve collaborating with regional and central teams to establish the Leadership Committee risk profile and providing insightful reporting on risk profiles utilizing data analytics. Additionally, you will be tasked with supporting the automation of existing reports and identifying opportunities for enhanced reporting through visualization tools and dashboard creation.

To excel in this role, we are looking for individuals who possess expertise in data models, data warehousing, and segmentation techniques. Strong analytical skills with a keen attention to detail and accuracy are essential, along with proficiency in Business Intelligence tools such as Tableau and Power BI. Advanced experience in Excel, VBA, and SQL, including Impala, Starburst Presto, and Hue, is highly desirable. Furthermore, the ability to design, develop, validate, and troubleshoot ETL workflows in Alteryx, with a minimum of 2 years of experience, is preferred.

If you are inspired to contribute to building a better future with us and are enthusiastic about the role or the opportunity to work at Macquarie, we encourage you to apply and share your unique perspective with us.

Financial Management, People and Engagement (FPE) serves as a consolidated interface for Macquarie's businesses across key areas of people, strategy, communications, and financial management. Comprising two pillars - Financial Management, and People and Engagement - FPE is responsible for overseeing the Group's financial, tax, and treasury activities, as well as strategic priorities. Additionally, it plays a crucial role in fostering our culture through people and community engagement strategies, while engaging with stakeholders to safeguard and enhance Macquarie's global reputation.

At Macquarie, we are committed to promoting diversity, equity, and inclusion. We strive to provide reasonable accommodations to individuals who may require support during the recruitment process and in their working arrangements. If you need additional assistance, please do not hesitate to inform us during the application process.

Posted 2 weeks ago

Apply

4.0 - 5.0 years

5 - 8 Lacs

Bengaluru, Karnataka, India

On-site

Key Responsibilities: Design, develop, and maintain ETL workflows and pipelines using Python Extract data from various sources (databases, APIs, flat files) and perform data transformations to meet business requirements Load processed data into target systems such as data warehouses, data lakes, or databases Optimize ETL processes for performance, scalability, and reliability Collaborate with data architects and analysts to understand data requirements and design solutions Implement data validation and error-handling mechanisms to ensure data quality Automate routine ETL tasks and monitoring using scripting and workflow tools Document ETL processes, data mappings, and technical specifications Troubleshoot and resolve issues in ETL workflows promptly Follow data governance, security policies, and compliance standards Required Skills: 4 to 5 years of hands-on experience in Python programming for ETL development Strong knowledge of ETL concepts and data integration best practices Experience with ETL frameworks/libraries such as Airflow, Luigi, Apache NiFi, Pandas , or similar Proficiency in SQL and working with relational databases (Oracle, MySQL, SQL Server, etc.) Familiarity with data formats like JSON, XML, CSV, Parquet Experience in cloud platforms and tools such as AWS Glue, Azure Data Factory, or GCP Dataflow is a plus Understanding of data warehousing concepts and architectures (star schema, snowflake schema) Experience with version control tools such as Git Knowledge of containerization (Docker) and CI/CD pipelines is desirable Preferred Qualifications: Experience working with big data technologies such as Hadoop, Spark, or Kafka Familiarity with NoSQL databases (MongoDB, Cassandra) Experience with data visualization and reporting tools Certification in Python or Data Engineering tools Knowledge of Agile methodologies and working in collaborative teams Soft Skills: Strong analytical and problem-solving skills Excellent communication and collaboration abilities Detail-oriented and committed to delivering high-quality work Ability to manage multiple tasks and meet deadlines Proactive and eager to learn new technologies and tools

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

The company Loyalytics is a rapidly growing Analytics consulting and product organization headquartered in Bangalore. They specialize in assisting large retail clients worldwide to capitalize on their data assets through consulting projects and product accelerators. With a team of over 100 analytics practitioners, Loyalytics is at the forefront of utilizing cutting-edge tools and technologies in the industry. The technical team at Loyalytics comprises data scientists, data engineers, and business analysts who handle over 1 million data points daily. The company operates in a massive multi-billion dollar global market opportunity and boasts a leadership team with a combined experience of over 40 years. Loyalytics has gained a strong reputation in the market, with word-of-mouth and referral-driven marketing strategies that have attracted prestigious retail brands in the GCC regions like Lulu and GMG. One of the key distinguishing factors of Loyalytics is its 10-year history as a bootstrapped company that continues to expand its workforce, currently employing over 100 individuals. They are now seeking a passionate and detail-oriented BI Consultant Tableau with 1-2 years of experience to join their analytics team. The ideal candidate for this role should have a solid foundation in SQL and hands-on expertise in developing dashboards using Tableau. Responsibilities include designing, developing, and maintaining interactive dashboards and reports, writing efficient SQL queries, collaborating with cross-functional teams, ensuring data accuracy, and optimizing dashboard performance. Strong analytical and problem-solving skills, along with good communication and documentation abilities, are essential for success in this position. Required skills and qualifications for the BI Consultant Tableau role at Loyalytics include 1-2 years of professional experience in BI/Data Analytics roles, proficiency in writing complex SQL queries, hands-on experience with Tableau Desktop, understanding of data modeling concepts and ETL workflows, familiarity with other BI tools like Power BI and Qlik, exposure to Tableau Server or Tableau Cloud, and knowledge of cloud platforms or databases such as AWS, GCP, Azure, Snowflake, or BigQuery. This is an exciting opportunity to join a dynamic and innovative team at Loyalytics and contribute to transforming data into valuable insights for clients in the retail industry.,

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Engineer, your primary responsibility will be to design and develop robust ETL pipelines using Python, PySpark, and various Google Cloud Platform (GCP) services. You will be tasked with building and optimizing data models and queries in BigQuery to support analytics and reporting needs. Additionally, you will play a crucial role in ingesting, transforming, and loading structured and semi-structured data from diverse sources. Collaboration with data analysts, scientists, and business teams is essential to grasp and address data requirements effectively. Ensuring data quality, integrity, and security across cloud-based data platforms will be a key part of your role. You will also be responsible for monitoring and troubleshooting data workflows and performance issues. Automation of data validation and transformation processes using scripting and orchestration tools will be a significant aspect of your day-to-day tasks. Your hands-on experience with Google Cloud Platform (GCP), particularly BigQuery, will be crucial. Proficiency in Python and/or PySpark programming, along with experience in designing and implementing ETL workflows and data pipelines, is required. A strong command of SQL and data modeling for analytics is essential. Familiarity with GCP services like Cloud Storage, Dataflow, Pub/Sub, and Composer will be beneficial. An understanding of data governance, security, and compliance in cloud environments is also expected. Experience with version control using Git and agile development practices will be advantageous for this role.,

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Engineer, you will be responsible for designing and developing robust ETL pipelines using Python, PySpark, and Google Cloud Platform (GCP) services. Your role will involve building and optimizing data models and queries in BigQuery for analytics and reporting purposes. You will also be responsible for ingesting, transforming, and loading structured and semi-structured data from various sources. Collaboration with data analysts, scientists, and business teams to comprehend data requirements will be a key aspect of your job. Ensuring data quality, integrity, and security across cloud-based data platforms is crucial. Monitoring and troubleshooting data workflows and performance issues will also be part of your responsibilities. Automation of data validation and transformation processes using scripting and orchestration tools will be an essential aspect of your role. You are required to have hands-on experience with Google Cloud Platform (GCP), especially BigQuery. Strong programming skills in Python and/or PySpark are necessary for this position. Your experience in designing and implementing ETL workflows and data pipelines will be valuable. Proficiency in SQL and data modeling for analytics is required. Familiarity with GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Composer is preferred. Understanding data governance, security, and compliance in cloud environments is essential. Experience with version control tools like Git and agile development practices will be beneficial for this role. If you are looking for a challenging opportunity to work on cutting-edge data engineering projects, this position is ideal for you.,

Posted 2 weeks ago

Apply