1655 ADF Jobs - Page 21

Set up a Job Alert
JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Design, develop, and implement data models and ETL processes for Power BI solutions Be able to understand and create Test Scripts for data validation as it moves through various lifecycles in cloud-based technologies Be able to work closely with business partners and data SMEs to understand Healthcare Quality Measures and its related business requirements Conduct data validation after major/minor enhancements in project and determine the best data validation techniques to implement Communicate effectively with leadership and analysts across teams Troubleshoot and resolve issues with Jobs/pipelines/overhead Ensure data accuracy and integrity between sources and consumers Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Graduate degree or equivalent (B.Tech./MCA preferred) with overall 3+ years of work experience 3+ years of advanced understanding to at least one programming language - Python, Spark, Scala Experience of working with Cloud technologies preferably Snowflake, ADF and Databricks Experience of working with Agile Methodology (preferably in Rally) Knowledge of Unix Shell Scripting for automation & scheduling Batch Jobs Knowledge of Configuration Management - Github Knowledge of Relational Databases - SQL Server, Oracle, Teradata, IBM DB2, MySQL Knowledge of Messaging Queues - Kafka/ActiveMQ/RabbitMQ Knowledge of CI/CD Tools - Jenkins Understanding Relational Database Model & Entity Relation diagrams Proven solid communication and interpersonal skills Proven excellent written and verbal communication skills with ability to provide clear explanations and overviews to others (internal and external) of their work efforts Proven solid facilitation, critical thinking, problem solving, decision making and analytical skills Demonstrated ability to prioritize and manage multiple tasks At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. 
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
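
For illustration only, a minimal PySpark sketch of the kind of post-load data validation described above; the table and column names are hypothetical and this is not Optum's actual test script:

```python
# Hedged sketch: row-count and checksum comparison between a source extract and a
# target table after an ADF/Databricks load. Table and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("data-validation").getOrCreate()

source = spark.table("staging.member_claims")   # hypothetical source table
target = spark.table("curated.member_claims")   # hypothetical target table

# 1. Row counts must match after the load.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

# 2. A column-level checksum catches silent truncation or mapping errors.
def checksum(df, cols):
    hashed = df.select(
        F.sha2(F.concat_ws("||", *[F.col(c).cast("string") for c in cols]), 256).alias("h")
    )
    return hashed.agg(
        F.count("h").alias("rows"),
        F.approx_count_distinct("h").alias("distinct_hashes"),
    ).first()

cols = ["member_id", "claim_id", "paid_amount"]
print(checksum(source, cols), checksum(target, cols))
```

A count-plus-checksum pair is a common lightweight choice here because it runs quickly on large tables while still surfacing most mapping and truncation defects.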

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

India

On-site

About Company Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently. Job Description We are seeking a Senior Software Engineer with strong experience in .NET, Microsoft Azure, and Identity Access Management (IAM) technologies. The ideal candidate will have deep technical knowledge and proven success in designing, developing, and maintaining enterprise-grade applications with a focus on performance, scalability, and security. Key Responsibilities Design and implement scalable software using C#, .NET, and related technologies. Develop technical documentation including high-level/low-level designs and UML diagrams. Build and maintain secure APIs and microservices using Azure services (App Service, APIM, ADF, AppInsights). Implement fine-grained authorization policies using IAM tools (e.g., PlainID, Azure AD/Entra ID). Design, optimize, and manage relational and non-relational databases (SQL Server, Oracle, PostgreSQL). Integrate and support cloud-based deployments using Azure DevOps CI/CD pipelines. Provide production support, perform root cause analysis, and resolve issues proactively. Participate in Agile ceremonies, sprint planning, stand-ups, and retrospectives. Required Skills & Experience 10+ years of experience in software development, system design, and architecture. Proficient in C#, .NET, JavaScript, and Python. Strong experience with RESTful/SOAP APIs, API Management, and Swagger/OpenAPI. Hands-on with Azure services: Logic Apps, DevOps, ADF, APIM, Databricks (desired). Skilled in writing complex SQL queries, stored procedures, and optimizing database performance. Familiarity with IAM tools like PlainID and Azure Entra ID (formerly AD). Experience with SharePoint development and Power Automate. Excellent communication, documentation, and cross-functional collaboration skills. Benefits & Perks Opportunity to work with leading global clients Exposure to modern technology stacks and tools Supportive and collaborative team environment Continuous learning and career development opportunities Skills: javascript,databricks,database management,plainid,ci/cd,soap apis,.net,apim,system design,agile methodologies,sql,python,postgresql,microsoft azure,power automate,sharepoint,oracle,identity access management (iam),restful apis,orm,azure,uml,c#,azure devops,api management,azure ad,iam,microservices
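
As a rough illustration of the IAM flow described above (shown in Python for brevity, although the role itself is .NET-centric), the sketch below acquires an Entra ID token and calls an APIM-fronted API; the scope and endpoint are placeholders:

```python
# Hedged sketch: obtain an Entra ID (Azure AD) access token with azure-identity and
# call an APIM-fronted API with it. Scope and URL are placeholders, not a real service.
import requests
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
# Scope of the target app registration -- hypothetical value.
token = credential.get_token("api://00000000-0000-0000-0000-000000000000/.default")

resp = requests.get(
    "https://example-apim.azure-api.net/orders/v1/orders",   # placeholder endpoint
    headers={"Authorization": f"Bearer {token.token}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```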

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

India

Remote

WHO WE ARE: Beyondsoft is a leading mid-sized business IT and consulting company that combines modern technologies and proven methodologies to tailor solutions that move your business forward. Our global head office is based in Singapore, and our team is made up of a diversely talented team of experts who thrive on innovation and pushing the bounds of technology to solve our customers’ most pressing challenges. When it comes time to deliver, we set our sights on that sweet spot where brilliance, emerging technologies, best practices, and accountability converge. We have a global presence spanning four continents (North America, South America, Europe, and Asia). Our global network of talent and customer-centric engagement model enables us to provide top-quality services on an unprecedented scale. WHAT WE’RE ABOUT: We believe that collaboration, transparency, and accountability are the values that guide our business, our delivery, and our brand. Everyone has something to bring to the table, and we believe in working together with our peers and clients to leverage the best of one another in everything we do. When we proactively collaborate, business decisions become easier, innovation is greater, and outcomes are better. Our ability to achieve our mission and live out our values depends upon a diverse, equitable, and inclusive culture. So, we strive to foster a workplace where people have the respect, support, and voice they deserve, where innovative ideas flourish, and where people can unleash their brilliance. For more information regarding DEI at Beyondsoft, please go to https://www.beyondsoft.com/diversity/. POSITION SUMMARY: As a Data Engineer, you will be responsible for designing, building, and optimizing scalable data pipelines and infrastructure. You’ll work closely with analytics, engineering, and product teams to ensure data integrity and enable high-impact decision-making. This position requires flexibility to work in PST timezone. ADDITIONAL REQUIREMENT FOR REMOTE POSITIONS: For remote positions, all candidates must complete a video screen with our corporate recruiting team. WHAT YOU WILL BE DOING: Maintain automated data onboarding and diagnostic tools for AIP partners Monitor ADF pipelines and mitigate pipeline runs as needed Maintain Privacy Dashboard and Bing user interests for Bing Growth team usage Participate and resolve live sites in related areas Data Platform development and maintenance, Notebook based processing pipelines and MT migration Manage the regular data quality Cosmos/MT jobs Online tooling and support such as DADT tools Watch out the abnormal pattern, perform ad-hoc data quality analysis, investigate daily the user ad click broken cases Perform additional duties as assigned. MINIMUM QUALIFICATIONS: Bachelor’s degree or higher in Computer Science or a related field. At least 3 years of experience in software development. Good quality software development and understanding. Ability to quickly communicate across time zones. Excellent written and verbal communication skills in English Self-motivated Coding Language: Java, C#, Python, Scala Technologies: Apache Spark, Apache Flink, Apache Kafka, Hadoop, Cosmos, SQL Azure resource management: Azure Data Factory, Azure Databricks, Azure Key vaults, Managed Identity, Azure Storage, etc. 
MS Project Big data experience is a plus Occasional infrequent in person activity may be required WHAT WE HAVE TO OFFER: Because we know how important our people are to the success of our clients, it’s a priority to make sure we stay committed to our employees and making Beyondsoft a great place to work. We take pride in offering competitive compensation and benefits along with a company culture that embodies continuous learning, growth, and training with a dedicated focus on employee satisfaction and work/life balance. Beyondsoft provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type with regards to race, color, religion, age, sex, national origin, disability status, genetics, veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, and the full employee lifecycle up through and including termination.
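
For illustration, a minimal Spark Structured Streaming sketch of the Kafka-to-Delta ingestion pattern this role touches; the broker, topic, and paths are placeholders, the cluster is assumed to have the Kafka and Delta Lake packages installed, and this is not Beyondsoft's actual pipeline:

```python
# Hedged sketch: read a Kafka topic with Structured Streaming and land events into a
# Delta table. Broker, topic, and storage paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")   # placeholder broker
    .option("subscribe", "user-clicks")                   # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to string and keep the event timestamp.
parsed = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_time"),
)

query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/user-clicks")  # placeholder path
    .outputMode("append")
    .start("/mnt/bronze/user_clicks")                              # placeholder path
)
query.awaitTermination()
```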

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

India

On-site

About Company Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently. About The Role We are seeking a Senior Data Engineer to join our growing cloud data team. In this role, you will design and implement scalable data pipelines and ETL processes using Azure Databricks , Azure Data Factory , PySpark , and Spark SQL . You’ll work with cross-functional teams to develop high-quality, secure, and efficient data solutions in a data lakehouse architecture on Azure. Key Responsibilities Design, develop, and optimize scalable data pipelines using Databricks, ADF, PySpark, Spark SQL, and Python Build robust ETL workflows to transform and load data into a lakehouse architecture on Azure Ensure data quality, security, and compliance with data governance and privacy standards Collaborate with stakeholders to gather business requirements and deliver technical data solutions Create and maintain technical documentation for workflows, architecture, and data models Work within an Agile environment and track tasks using tools like Azure DevOps Required Skills & Experience 8+ years of experience in data engineering and enterprise data platform development Proven expertise in Azure Databricks, Azure Data Factory, PySpark, and Spark SQL Strong understanding of Data Warehouses, Data Marts, and Operational Data Stores Proficient in writing complex SQL / PL-SQL queries and understanding data models and data lineage Knowledge of data management best practices: data quality, lineage, metadata, reference/master data Experience working in Agile teams with tools like Azure DevOps Strong problem-solving skills, attention to detail, and the ability to multi-task effectively Excellent communication skills for interacting with both technical and business teams Benefits And Perks Opportunity to work with leading global clients Exposure to modern technology stacks and tools Supportive and collaborative team environment Continuous learning and career development opportunities Skills: lineage,data modeling,pyspark,metadata,spark sql,data marts,azure databricks,sql,azure data factory,pl-sql,spark,pl/sql,adf,data governance,python,data warehouses
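
A minimal PySpark sketch of the kind of ADF-orchestrated lakehouse transformation described above; the paths, columns, and Delta layout are hypothetical:

```python
# Hedged sketch: raw landing files -> cleaned Delta table (bronze-to-silver style).
# All paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

raw = spark.read.json("/mnt/raw/sales/2024/")          # placeholder landing path

clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("order_id").isNotNull())
)

(clean.write.format("delta")
      .mode("overwrite")
      .option("overwriteSchema", "true")
      .save("/mnt/silver/sales_orders"))               # placeholder lakehouse path
```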

Posted 3 weeks ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Hybrid

Azure Data Engineer - ADF, ADB, PySpark, SQL
Interested candidates, please share your resume with the below details to juisagars@hexaware.com:
Total Experience:
Relevant Experience:
Current company:
Current CTC:
Expected CTC:
Notice Period:
Current location:
Preferred location:

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Job Description: Senior/Azure Data Engineer Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role Proficient in Azure technologies such as ADB, ADF, SQL (with the ability to write complex SQL queries), PySpark, Python, Synapse, Delta Tables, Unity Catalog Hands-on in Python, PySpark or Spark SQL Hands-on in Azure Analytics and DevOps Taking part in Proofs of Concept (POCs) and pilot solution preparation Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows Experience in business process mapping for data and analytics solutions At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We’re committed to fostering an inclusive environment where everyone can thrive. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers typically through online services, such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor ask a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here .
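
For illustration, a short Databricks sketch combining a Unity Catalog three-level table name, a windowed Spark SQL query, and Delta time travel; the catalog, schema, and table names are placeholders:

```python
# Hedged sketch intended for a Databricks cluster (where Delta and Unity Catalog are
# available). Object names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()   # on Databricks, `spark` already exists

# A "complex SQL" example: latest order per customer via a window function,
# read from a three-level Unity Catalog name (catalog.schema.table).
latest_per_customer = spark.sql("""
    SELECT customer_id, order_id, order_ts
    FROM (
        SELECT customer_id, order_id, order_ts,
               ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) AS rn
        FROM main.sales.orders            -- placeholder Unity Catalog table
    ) ranked
    WHERE rn = 1
""")
latest_per_customer.show(10)

# Delta time travel: inspect an earlier version of the table for validation.
previous = spark.read.option("versionAsOf", 0).table("main.sales.orders")
print(previous.count())
```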

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru

On-site

Job Description: Senior Data Engineer (Azure, Snowflake, ADF) Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida/ Gurgaon / Pune / Indore / Mumbai Key Responsibilities: Data Integration & Orchestration: Integrate with Snowflake for scalable data storage and retrieval. Use Azure Data Factory (ADF) and Function Apps for orchestrating and transforming data pipelines. Streaming & Messaging: 5+ years of experience in ML/AI/DevOps engineering, including Edge deployment. Strong proficiency in OpenShift, Azure ML, and Terraform. Hands-on experience with Kafka, Snowflake, and Function Apps. Proven experience with CI/CD pipelines, preferably Azure DevOps and Argo. Good understanding of monitoring tools (Prometheus, Grafana, AppInsights). Experience in secure deployments and managing private endpoints in Azure. At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We’re committed to fostering an inclusive environment where everyone can thrive. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers typically through online services, such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor ask a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here .
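
As a rough sketch of the Snowflake integration mentioned above, the snippet below uses the official Python connector; the account, credentials, and table are placeholders, and in practice credentials would come from a secret store such as Key Vault:

```python
# Hedged sketch: query Snowflake with the official Python connector.
# Account, credentials, and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",      # placeholder account locator
    user="SVC_DATA_PIPELINE",          # placeholder service user
    password="********",               # use a secret store in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT order_id, status, amount FROM orders WHERE load_date = %s",
        ("2024-01-31",),
    )
    for order_id, status, amount in cur.fetchmany(10):
        print(order_id, status, amount)
finally:
    conn.close()
```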

Posted 4 weeks ago

Apply

7.0 years

3 - 10 Lacs

Bengaluru

On-site

Your Job You’re not the person who will settle for just any role. Neither are we. Because we’re out to create Better Care for a Better World, and that takes a certain kind of person and teams who care about making a difference. Here, you’ll bring your professional expertise, talent, and drive to building and managing our portfolio of iconic, ground-breaking brands. In your role, you’ll help us deliver better care for billions of people around the world. It starts with YOU. About Us Huggies®. Kleenex®. Cottonelle®. Scott®. Kotex®. Poise®. Depend®. Kimberly-Clark Professional®. You already know our legendary brands—and so does the rest of the world. In fact, millions of people use Kimberly-Clark products every day. We know these amazing Kimberly-Clark products wouldn’t exist without talented professionals, like you. At Kimberly-Clark, you’ll be part of the best team committed to driving innovation, growth and impact. We’re founded on 150 years of market leadership, and we’re always looking for new and better ways to perform – so there’s your open door of opportunity. It’s all here for you at Kimberly-Clark; you just need to log on! Led by Purpose. Driven by You. About You You’re driven to perform at the highest level possible, and you appreciate a performance culture fueled by authentic caring. You want to be part of a company actively dedicated to sustainability, inclusion, wellbeing, and career development. You love what you do, especially when the work you do makes a difference. At Kimberly-Clark, we’re constantly exploring new ideas on how, when, and where we can best achieve results. When you join our team, you’ll experience Flex That Works: flexible (hybrid) work arrangements that empower you to have purposeful time in the office and partner with your leader to make flexibility work for both you and the business. Our Data Engineers play a crucial role in designing and operationalizing transformational enterprise data solutions on Cloud Platforms, integrating Azure services, Snowflake technology, and other third-party data technologies. Cloud Data Engineers will work closely with a multidisciplinary agile team to build high-quality data pipelines that drive analytic solutions. These solutions will generate insights from our connected data, enabling Kimberly-Clark to advance its data-driven decision-making capabilities. The ideal candidate will have a deep understanding of data architecture, data engineering, data warehousing, data analysis, reporting, and data science techniques and workflows. They should be skilled in creating data products that support analytic solutions and possess proficiency in working with APIs and understanding data structures to serve them. Experience in using ADF (Azure Data Factory) for orchestrating and automating data movement and transformation. Additionally, expertise in data visualization tools, specifically PowerBI, is required. The candidate should have strong problem-solving skills, be able to work as part of a technical, cross-functional analytics team, and be an agile learner with a passion for solving complex data problems and delivering insights. If you are an agile learner, possess strong problem-solving skills, can work as part of a technical, cross-functional analytics team, and want to solve complex data problems while delivering insights that help enable our analytics strategy, we would like to hear from you. 
This role is perfect for a developer passionate about leveraging cutting-edge technologies to create impactful digital products that connect with and serve our clients effectively. Kimberly-Clark has an amazing opportunity to continue leading the market, and DTS is poised to deliver compelling and robust digital capabilities, products, and solutions to support it. This role will have substantial influence in this endeavor. If you are excited to make a difference applying cutting-edge technologies to solve real business challenges and add value to a global, market-leading organization, please come join us! Scope/Categories: Role will report to the Data & Analytics Engineer Manager and Product Owner. Key Responsibilities: Design and operationalize enterprise data solutions on Cloud Platforms : Develop and implement scalable and secure data solutions on cloud platforms, ensuring they meet enterprise standards and requirements. This includes designing data architecture, selecting appropriate cloud services, and optimizing performance for data processing and storage. Integrate Azure services, Snowflake technology, and other third-party data technologies: Seamlessly integrate various data technologies, including Azure services, Snowflake, and other third-party tools, to create a cohesive data ecosystem. This involves configuring data connectors, ensuring data flow consistency, and managing dependencies between different systems. Build and maintain high-quality data pipelines for analytic solutions: Develop robust data pipelines that automate the extraction, transformation, and loading (ETL) of data from various sources into a centralized data warehouse or lake. Ensure these pipelines are efficient, reliable, and capable of handling large volumes of data. Collaborate with a multidisciplinary agile team to generate insights from connected data Work closely with data scientists, analysts, and other team members in an agile environment to translate business requirements into technical solutions. Participate in sprint planning, stand-ups, and retrospectives to ensure timely delivery of data products. Manage and create data inventories for analytics and APIs to be consumed : Develop and maintain comprehensive data inventories that catalog available data assets and their metadata. Ensure these inventories are accessible and usable by various stakeholders, including through APIs that facilitate data consumption. Design data integrations with internal and external products : Architect and implement data integration solutions that enable seamless data exchange between internal systems and external partners or products. This includes ensuring data integrity, security, and compliance with relevant standards. Build data visualizations to support analytic insights : Create intuitive and insightful data visualizations using tools like PowerBI, incorporating semantic layers to provide a unified view of data and help stakeholders understand complex data sets and derive actionable insights. Required Skills and Experience: Proficiency with Snowflake Ecosystem : Demonstrated ability to use Snowflake for data warehousing, including data ingestion, transformation, and querying. Proficiency in using Snowflake's features for scalable data processing, including the use of Snowpipe for continuous data ingestion and Snowflake's SQL capabilities for data transformation. Ability to optimize Snowflake performance through clustering, partitioning, and other best practices. 
Azure Data Factory (ADF): Experience in using ADF for orchestrating and automating data movement and transformation within the Azure ecosystem. Proficiency in programming languages such as SQL, NoSQL, Python, Java, R, and Scala: Strong coding skills in multiple programming languages used for data manipulation, analysis, and pipeline development. Experience with ETL (extract, transform, and load) systems and API integrations: Expertise in building and maintaining ETL processes to consolidate data from various sources into centralized repositories, and integrating APIs for seamless data exchange. Understanding of data architecture, data engineering, data warehousing, data analysis, reporting, and data science techniques and workflows : You should have a comprehensive knowledge of designing and implementing data systems that support various analytic and operational use cases, including data storage, processing, and retrieval. Basic understanding of machine learning concepts to support data scientists on the team: Familiarity with key machine learning principles and techniques to better collaborate with data scientists and support their analytical models. Strong problem-solving skills and ability to work as part of a technical, cross-functional analytics team : Excellent analytical and troubleshooting abilities, with the capability to collaborate effectively with team members from various technical and business domains. Skilled in creating data products that support analytic solutions: Proficiency in developing data products that enable stakeholders to derive meaningful insights and make data-driven decisions. This involves creating datasets, data models, and data services tailored to specific business needs. Experience in working with APIs and understanding data structures to serve them: Experience in designing, developing, and consuming APIs for data access and integration. This includes understanding various data structures and formats used in API communication. Knowledge of managing sensitive data, ensuring data privacy and security: Expertise in handling sensitive data with strict adherence to data privacy regulations and security best practices to protect against unauthorized access and breaches. Agile learner with a passion for solving complex data problems and delivering insights: A proactive and continuous learner with enthusiasm for addressing challenging data issues and providing valuable insights through innovative solutions. Experience with CPG Companies and POS Data: Experience in analyzing and interpreting POS data to provide actionable insights for CPG companies, enhancing their understanding of consumer behavior and optimizing sales strategies. Knowledge and Experience Bachelor’s degree in management information systems/technology, Computer Science, Engineering, or related discipline. MBA or equivalent is preferred. 7+ years of experience in designing large-scale data solutions, performing design assessments, crafting design options and analysis, finalizing preferred solution choice working with IT and Business stakeholders. 5+ years of experience tailoring, configuring, and crafting solutions within the Snowflake environment, including a profound grasp of Snowflake's data warehousing capabilities, data architecture, SQL optimization for Snowflake, and leveraging Snowflake's unique features such as Snowpipe, Streams, and Tasks for real-time data processing and analytics. 
A strong foundation in data migration strategies, performance tuning, and securing data within the Snowflake ecosystem is essential. 3+ years demonstrated expertise in architecting solutions within the Snowflake ecosystem, adhering to best practices in data architecture and design patterns. 7+ years of data engineering or design experience, designing, developing, and deploying scalable enterprise data analytics solutions from source system through ingestion and reporting. Expertise in data modeling principles/methods including, Conceptual, Logical & Physical Data Models for data warehouses, data lakes and/or database management systems. 5+ years of hands-on experience designing, building, and operationalizing data solutions and applications using cloud data and analytics services in combination with 3rd parties. 7+ years of hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional, NoSQL data platform technologies, and ETL and data ingestion protocols). 7+ years of experience with database development and scripting. Professional Skills: Strong communication and interpersonal skills. Strong analytical and problem-solving skills and passion for product development. Strong understanding of Agile methodologies and open to working in agile environments with multiple stakeholders. Professional attitude and service orientation; team player. Ability to translate business needs into potential analytics solutions. Strong work ethic, ability to work at an abstract level and gain consensus. Ability to build a sense of trust and rapport to create a comfortable and effective workplace. Self-starter who can see the big picture, prioritize work to make the largest impact on the business and customer's vision and requirements. Fluency in English. To Be Considered Click the Apply button and complete the online application process. A member of our recruiting team will review your application and follow up if you seem like a great fit for this role. In the meantime, check out the career’s website. You’ll want to review this and come prepared with relevant questions when you pass GO and begin interviews. For Kimberly-Clark to grow and prosper, we must be an inclusive organization that applies the diverse experiences and passions of its team members to brands that make life better for people all around the world. We actively seek to build a workforce that reflects the experiences of our consumers. When you bring your original thinking to Kimberly-Clark, you fuel the continued success of our enterprise. We are a committed equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation, gender, identity, age, pregnancy, genetic information, citizenship status, or any other characteristic protected by law. The statements above are intended to describe the general nature and level of work performed by employees assigned to this classification. Statements are not intended to be construed as an exhaustive list of all duties, responsibilities and skills required for this position. Additional information about the compensation and benefits for this role are available upon request. You may contact kcchrprod@service-now.com for assistance. You must include the six-digit Job # with your request. This role is available for local candidates already authorized to work in the role’s country only. 
Kimberly-Clark will not provide relocation support for this role. .
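
For illustration, a hedged sketch of the Streams and Tasks pattern referenced above, issued through the Snowflake Python connector; every object name, warehouse, and schedule is a placeholder rather than an actual Kimberly-Clark pipeline:

```python
# Hedged sketch: capture changes on a landing table with a Snowflake STREAM and merge
# them on a schedule with a TASK. All names, warehouse, and schedule are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345", user="ETL_USER", password="********",
    warehouse="LOAD_WH", database="POS", schema="RAW",
)
cur = conn.cursor()

# Change capture on the landing table.
cur.execute("CREATE STREAM IF NOT EXISTS pos_sales_stream ON TABLE pos_sales_raw")

# Task that loads new rows into the curated layer every 5 minutes,
# but only when the stream actually has data.
cur.execute("""
    CREATE TASK IF NOT EXISTS merge_pos_sales
      WAREHOUSE = LOAD_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('POS_SALES_STREAM')
    AS
      INSERT INTO POS.CURATED.pos_sales
      SELECT store_id, sku, sale_ts, amount
      FROM pos_sales_stream
""")
cur.execute("ALTER TASK merge_pos_sales RESUME")
conn.close()
```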

Posted 4 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference nitin.patil@ust.com Act fast for immediate attention! ⏳📩 Key Responsibilities: Data Extraction: Extract data from diverse sources while ensuring accuracy and completeness. Data Transformation: Perform data cleaning, validation, and apply business rules to transform raw data into a structured format for analysis. Data Loading: Load transformed data into target systems and design efficient data models and workflows. ETL Process Management: Design, develop, implement, and maintain ETL processes to integrate data efficiently into data warehouses or analytics platforms. Performance Optimization: Optimize and tune ETL processes for performance improvements, monitor jobs, and troubleshoot production issues. Data Quality and Governance: Ensure the quality, integrity, and compliance of data according to organizational and regulatory standards. Collaboration & Documentation: Work with business stakeholders to understand data requirements, document ETL workflows, and ensure proper communication. Tool-Specific Responsibilities: Leverage DataStage for designing and building complex ETL jobs. Use Azure Data Factory for scalable cloud-based integration and orchestration. Develop and maintain solutions for Snowflake data warehousing. Utilize SQL Server to manage data extraction and transformation processes. Implement DataStage Sequencers , Parallel Jobs, Aggregators, Joins, Merges, Lookups, etc. Provide support in resolving integration-related production issues following the change management process. Key Focus: Ensuring efficient, accurate, and secure data flow for the organization’s data warehousing and analytics needs. Must-Have Skills: Education: Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field. ETL Tools: 7+ years of hands-on experience in DataStage (V8.5 or higher) . Expertise in DataStage V11.3 and 8.7 versions. Strong experience in DataStage design and parallel jobs (e.g., Aggregator, Merge, Lookup, Source dataset, Change Capture). Advanced knowledge of UNIX and shell scripting . Azure Data Factory (ADF): 3+ years of experience in designing, developing, and managing Azure Data Factory pipelines . Proficient in using ADF connectors for integration with different data sources and destinations. Experience in ADF Data Flows and pipeline orchestration. Database & SQL: 7+ years of experience in Microsoft SQL Server , including experience in writing and optimizing SQL queries . 3+ years of experience in DB2 UDB Administration and Support . Experience in creating and managing SQL Server Agent jobs and SSIS packages . Hands-on experience in Data warehousing solutions and data modeling with SQL Server. Data Quality & Governance: Ability to ensure high data integrity and governance throughout ETL processes. Good to Have Skills: Experience with Snowflake data warehouse solutions. Familiarity with cloud-based ETL tools and technologies. Knowledge of Kafka (Basic Understanding) for stream processing and integration. Experience with Report Solution/Design and building automated reports using SQL Server and other reporting tools. Experience with implementing Data Security and Compliance processes in ETL. Role Requirements: Problem-Solving Skills: Ability to troubleshoot issues related to ETL processes and data integration. 
Collaboration: Ability to work effectively in a cross-functional team with business analysts, data engineers, and other stakeholders. Attention to Detail: Strong focus on ensuring the accuracy and consistency of data throughout the ETL pipeline. Communication: Excellent communication skills for documentation and reporting purposes.
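
For illustration, a minimal sketch of triggering and monitoring an Azure Data Factory pipeline run with the Azure Python SDK (azure-identity and azure-mgmt-datafactory); the subscription, resource group, factory, and pipeline names are placeholders:

```python
# Hedged sketch: start an ADF pipeline run and poll its status for support/monitoring.
# All resource names and IDs are placeholders.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"            # placeholder
resource_group, factory_name = "rg-data-platform", "adf-etl-prod"   # placeholders

client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Kick off the pipeline (equivalent to a manual trigger in the ADF UI).
run = client.pipelines.create_run(
    resource_group, factory_name, "pl_copy_datastage_extracts", parameters={}
)

# Poll until the run finishes, then report the final status.
while True:
    status = client.pipeline_runs.get(resource_group, factory_name, run.run_id)
    if status.status not in ("InProgress", "Queued"):
        break
    time.sleep(30)
print(f"Pipeline run {run.run_id} finished with status: {status.status}")
```

In production this polling would usually be replaced by ADF's own monitoring, alerts, or an orchestrator, but the same SDK calls are handy for ad-hoc troubleshooting of failed runs.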

Posted 4 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About us: Where elite tech talent meets world-class opportunities! At Xenon7, we work with leading enterprises and innovative startups on exciting, cutting-edge projects that leverage the latest technologies across various domains of IT including Data, Web, Infrastructure, AI, and many others. Our expertise in IT solutions development and on-demand resources allows us to partner with clients on transformative initiatives, driving innovation and business growth. Whether it's empowering global organizations or collaborating with trailblazing startups, we are committed to delivering advanced, impactful solutions that meet today's most complex challenges. We are building a community of top-tier experts and we're opening the doors to an exclusive group of exceptional AI & ML Professionals ready to solve real-world problems and shape the future of intelligent systems. Structured Onboarding Process We ensure every member is aligned and empowered: Screening - We review your application and experience in Data & AI, ML engineering, and solution delivery Technical Assessment - 2-step technical assessment process that includes an interactive problem-solving test, and a verbal interview about your skills and experience Matching you to Opportunity - We explore how your skills align with ongoing projects and innovation tracks Who We're Looking For We are looking for a skilled and experienced Data Engineer with deep expertise in the Databricks ecosystem to join our data engineering team. You will be responsible for building, optimizing, and maintaining scalable data pipelines on Databricks, leveraging Delta Lake, PySpark, and cloud-native services (AWS, Azure, or GCP). You will collaborate with data scientists, analysts, and business stakeholders to ensure clean, high-quality, and governed data is available for analytics and machine learning use cases. Requirements 6+ years of experience as a Data Engineer, with at least 4 years hands-on with Databricks in production environments Proficient in PySpark and SQL for large-scale data processing Deep understanding of Delta Lake features: ACID transactions, schema enforcement, time travel, and vacuuming Experience working with cloud platforms: AWS (Glue, S3), Azure (Data Lake, ADF), or GCP (BigQuery, GCS) Hands-on experience with Databricks Auto Loader, Structured Streaming, and job scheduling Familiarity with Unity Catalog for multi-workspace governance and fine-grained data access Experience integrating with orchestration tools (Airflow, ADF) and using infrastructure-as-code for deployment Comfortable with version control and automation using Git, Databricks Repos, dbx, or Terraform Experience with performance tuning, Z-Ordering, caching strategies, and partitioning best practices Benefits At Xenon7, we're not just building AI systems—we're building a community of talent with the mindset to lead, collaborate, and innovate together. Ecosystem of Opportunity: You'll be part of a growing network where client engagements, thought leadership, research collaborations, and mentorship paths are interconnected. Whether you're building solutions or nurturing the next generation of talent, this is a place to scale your influence Collaborative Environment: Our culture thrives on openness, continuous learning, and engineering excellence. 
You'll work alongside seasoned practitioners who value smart execution and shared growth Flexible & Impact-Driven Work: Whether you're contributing from a client project, innovation sprint, or open-source initiative, we focus on outcomes—not hours. Autonomy, ownership, and curiosity are encouraged here Talent-Led Innovation: We believe communities are strongest when built around real practitioners. Our Innovation Community isn't just a knowledge-sharing forum—it's a launchpad for members to lead new projects, co-develop tools, and shape the direction of AI itself
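
As a rough illustration of the Auto Loader requirement above, a minimal Databricks sketch that incrementally ingests JSON into a Delta table; the paths and target table are placeholders, and `spark` is assumed to be the Databricks-provided session:

```python
# Hedged sketch: Databricks Auto Loader (cloudFiles) incrementally ingesting JSON into
# a Unity Catalog Delta table. Paths and table names are placeholders; this only runs
# on a Databricks runtime, where `spark` is predefined.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/Volumes/raw/_schemas/events")  # placeholder
    .load("/Volumes/raw/landing/events")                                  # placeholder
)

(
    stream.writeStream
    .option("checkpointLocation", "/Volumes/raw/_checkpoints/events")     # placeholder
    .trigger(availableNow=True)          # process the backlog, then stop (batch-style)
    .toTable("main.bronze.events")       # placeholder catalog.schema.table
)
```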

Posted 4 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Functional Responsibility Having sound knowledge of banking domain (Wholesale, retail, core banking, trade finance) In-depth understanding of RBI Regulatory reporting and guidelines including RBI ADF approach document. Should have an understanding of balance sheet and P&L. Supporting clients by providing user manuals, trainings, conducting workshops and preparing case studies. Process Adherence Review the initial and ongoing development of product Responsible for documenting, validating, communicating and coordinating requirements. Provide support to business development by preparing proposals, concept presentations and outreach activities Maintaining and updating tracker, reviewing test cases, providing training to internal as well as external stakeholders Client Management / Stakeholder Management Interact with clients in relation to assignment execution and manage operational relationships effectively Interact with client for requirement gathering, issue tracking, change request discussion, FRD writing and preparing project status reports People Development Co-ordinate with assignment-specific team of consultants, developers, QA and monitor performance to ensure timely and effective delivery

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Mysore, Karnataka, India

On-site

Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In This Role, Your Responsibilities May Include Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Preferred Education Master's Degree Required Technical And Professional Expertise We are seeking a skilled Azure Data Engineer with 5+ years of experience, including 3+ years of hands-on experience with ADF/Databricks. The ideal candidate will have Databricks, Data Lake, and Python programming skills, along with experience deploying to Databricks. Familiarity with Azure Data Factory Preferred Technical And Professional Experience Good communication skills. 3+ years of experience with ADF/DB/DataLake. Ability to communicate results to technical and non-technical audiences
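
For illustration, a small sketch of the enterprise-search piece using the Elasticsearch Python client (8.x style); the host, credentials, and index are placeholders:

```python
# Hedged sketch: index a document and run a match query with the Elasticsearch 8.x
# Python client. Host, credentials, and index name are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "********"))

es.index(
    index="support-tickets",
    document={"ticket_id": 1042, "summary": "ADF pipeline failed on copy activity"},
)

hits = es.search(index="support-tickets", query={"match": {"summary": "pipeline"}})
for hit in hits["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["summary"])
```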

Posted 4 weeks ago

Apply

7.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Databricks Engineer - Lead
Primary skills: Databricks, PySpark, SQL
Secondary skills: Advanced SQL, Azure Data Factory, and Azure Data Lake
Mode of Work: Work from Office
Location: Hyderabad
Experience: 7 to 10 Years
Responsibilities
· Design and develop ETL pipelines using ADF for data ingestion and transformation.
· Collaborate with Azure stack modules like Data Lakes and SQL DW to build robust data solutions.
· Write SQL, Python, and PySpark code for efficient data processing and transformation.
· Understand and translate business requirements into technical designs.
· Develop mapping documents and transformation rules as per project scope.
· Communicate project status with stakeholders, ensuring smooth project execution.
Requirements
· 7-10 years of experience in data ingestion, data processing, and analytical pipelines for big data and relational databases.
· Hands-on experience with Azure services: ADLS, Azure Databricks, Data Factory, Synapse, Azure SQL DB.
· Experience in SQL, Python, and PySpark for data transformation and processing.
· Familiarity with DevOps and CI/CD deployments.
· Strong communication skills and attention to detail in high-pressure situations.
· Experience in the insurance or financial industry is preferred.
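
For illustration, a minimal PySpark sketch of applying a mapping document as transformation rules, as mentioned in the responsibilities above; the column mapping, paths, and derived rule are hypothetical:

```python
# Hedged sketch: turn a simple mapping document (source -> target column names plus a
# derived rule) into a PySpark transformation. All names and paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("apply-mapping").getOrCreate()

column_map = {            # source column -> target column (placeholder mapping)
    "POL_NO": "policy_number",
    "PREM_AMT": "premium_amount",
    "EFF_DT": "effective_date",
}

src = spark.read.parquet("/mnt/landing/policies/")        # placeholder path

mapped = src.select([F.col(s).alias(t) for s, t in column_map.items()])
mapped = mapped.withColumn(
    "premium_band",
    F.when(F.col("premium_amount") >= 100000, "HIGH").otherwise("STANDARD"),
)

mapped.write.format("delta").mode("append").save("/mnt/curated/policies/")  # placeholder
```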

Posted 4 weeks ago

Apply

7.0 years

0 Lacs

India

On-site

Job Summary: We are seeking a technically strong and well-rounded ETL Developer with proven experience in Azure Data Factory (ADF) and Oracle Fusion ERP systems. The ideal candidate will play a key role in migrating legacy SSIS packages, integrating complex enterprise data sources (including Oracle Fusion and Microsoft CRM), and preparing data pipelines for Power BI dashboards and AI-driven analytics . Key Responsibilities: Migrate and rebuild existing SSIS packages into modern Azure Data Factory pipelines Design, develop, and optimize end-to-end ETL solutions using ADF Integrate and extract data from Oracle Fusion ERP , Oracle EBS , and Microsoft CRM Create and manage reusable components such as pipelines, datasets, linked services, triggers Collaborate with business analysts and Power BI developers to ensure clean and accurate data flow Perform complex SQL scripting and transformation logic Monitor, troubleshoot, and tune ETL performance Maintain proper documentation of data sources, flows, and mappings Must-Have Skills: 7+ years of hands-on experience in ETL development 4+ years with Azure Data Factory (ADF) : pipelines, dataflows, triggers, integration runtimes Solid understanding of SSIS and experience in migration to ADF Deep knowledge of Oracle Fusion ERP data models, especially Finance, SCM, and HCM modules Experience with FBDI , HDL , OTBI , and BI Publisher reporting Strong SQL and PL/SQL development skills Familiarity with Azure SQL Database , Data Lake , Blob Storage Knowledge of how ADF pipelines feed Power BI datasets Experience working with CI/CD pipelines (preferably Azure DevOps) Nice to Have: Microsoft Certified: Azure Data Engineer Associate or equivalent Exposure to OIC (Oracle Integration Cloud) or similar iPaaS tools Experience with REST/SOAP APIs , JSON/XML , and Microsoft Dynamics CRM Prior experience supporting AI/ML analytics pipelines
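
As a rough sketch of the kind of extract-and-stage step an ADF pipeline would normally orchestrate here, the snippet below pulls rows from an Oracle source and loads them into Azure SQL using python-oracledb and pyodbc; all connection details, tables, and columns are placeholders:

```python
# Hedged sketch: extract from an Oracle source and stage the rows in Azure SQL.
# Connection strings, tables, and columns are hypothetical.
import oracledb   # python-oracledb (thin mode)
import pyodbc

with oracledb.connect(user="ETL_USER", password="********",
                      dsn="oraclehost:1521/SOURCEDB") as ora:
    cur = ora.cursor()
    cur.execute(
        "SELECT invoice_id, supplier_id, amount FROM ap_invoices WHERE status = 'APPROVED'"
    )
    rows = cur.fetchall()

sql = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=sql-stage.database.windows.net;"
    "DATABASE=staging;UID=etl_user;PWD=********;Encrypt=yes"
)
with sql, sql.cursor() as cur:
    cur.fast_executemany = True
    cur.executemany(
        "INSERT INTO stg.ap_invoices (invoice_id, supplier_id, amount) VALUES (?, ?, ?)",
        rows,
    )
```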

Posted 4 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job description Job Name: Senior Data Engineer DBT & Snowflake Years of Experience: 5 Job Description: We are looking for a skilled and experienced DBT-Snowflake Developer to join our team! As part of the team, you will be involved in the implementation of the ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you! Primary Skills: DBT,Snowflake Secondary Skills: ADF,Databricks,Python,Airflow,Fivetran,Glue Role Description: Data engineering role requires creating and managing technological infrastructure of a data platform, be in-charge / involved in architecting, building, and managing data flows / pipelines and construct data storages (noSQL, SQL), tools to work with big data (Hadoop, Kafka), and integration tools to connect sources or other databases. Role Responsibility: Translate functional specifications and change requests into technical specifications Translate business requirement document, functional specification, and technical specification to related coding Develop efficient code with unit testing and code documentation Ensuring accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving Setting up the development environment and configuration of the development tools Communicate with all the project stakeholders on the project status Manage, monitor, and ensure the security and privacy of data to satisfy business needs Contribute to the automation of modules, wherever required To be proficient in written, verbal and presentation communication (English) Co-ordinating with the UAT team Role Requirement: Proficient in basic and advanced SQL programming concepts (Procedures, Analytical functions etc.) Good Knowledge and Understanding of Data warehouse concepts (Dimensional Modeling, change data capture, slowly changing dimensions etc.) Knowledgeable in Shell / PowerShell scripting Knowledgeable in relational databases, nonrelational databases, data streams, and file stores Knowledgeable in performance tuning and optimization Experience in Data Profiling and Data validation Experience in requirements gathering and documentation processes and performing unit testing Understanding and Implementing QA and various testing process in the project Knowledge in any BI tools will be an added advantage Sound aptitude, outstanding logical reasoning, and analytical skills Willingness to learn and take initiatives Ability to adapt to fast-paced Agile environment Additional Requirement: • Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Snowflake, ensure the effective transformation and load data from diverse sources into data warehouse or data lake. • Implement and manage data models in DBT, guarantee accurate data transformation and alignment with business needs. • Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting. • Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance. • Establish best DBT processes to improve performance, scalability, and reliability. • Expertise in SQL and a strong understanding of Data Warehouse concepts and Modern Data Architectures. • Familiarity with cloud-based platforms (e.g., AWS, Azure, GCP). • Migrate legacy transformation code into modular DBT data models
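
For illustration, a hedged sketch of a dbt Python model (supported from dbt 1.3+ on Snowflake via Snowpark); the model and column names are placeholders, and in this stack most transformations would normally be written as SQL dbt models:

```python
# Hedged sketch: a dbt Python model running on Snowflake/Snowpark.
# Model and column names are placeholders.
from snowflake.snowpark.functions import col

def model(dbt, session):
    dbt.config(materialized="table")

    orders = dbt.ref("stg_orders")          # upstream dbt model (placeholder name)

    # Keep only completed orders and derive a simple flag.
    completed = orders.filter(col("STATUS") == "COMPLETED")
    return completed.with_column("IS_LARGE_ORDER", col("AMOUNT") > 1000)
```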

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Kolkata metropolitan area, West Bengal, India

On-site

Role: AWS Data Engineer Experience: 4+ years Work Location: TCS Kolkata Responsibilities AWS Data engineer having experience in building data pipeline with Glue, Lambda, EMR, S3. Having experience in PySpark and Python programming. Should have PySpark, SQL, Azure Services (ADF, DataBricks, Synapse) Designing and implementing data ingestion pipelines from multiple sources using Azure Databricks. Developing scalable and re-usable frameworks for ingesting data sets Integrating the end-to-end data pipeline - to take data from source systems to target data repositories ensuring the quality and consistency of data is always maintained. Working with event based / streaming technologies to ingest and process data. Working with other members of the project team to support delivery of additional project components (API interfaces, Search) Evaluating the performance and applicability of multiple tools against customer requirements Have knowledge on deployment framework such as CI/CD, GitHub check in process Able to perform data analytics, data analysis and data profiling Good communication Qualifications 10+2+3 years of regular education is must Minimum 4+ years of relevant experience is a must Note: Candidate should be willing to join in Third party payroll Immediate to 30 days joiners are preferred
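
For illustration, a minimal AWS Glue (PySpark) job skeleton of the ingestion pattern described above; the bucket paths are placeholders and the awsglue modules are only available inside the Glue job environment:

```python
# Hedged sketch: a minimal Glue PySpark job -- read JSON from S3, dedupe, write Parquet.
# Bucket paths are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

raw = spark.read.json("s3://example-landing/orders/2024/")        # placeholder bucket
clean = raw.dropDuplicates(["order_id"]).withColumn("ingest_date", F.current_date())
clean.write.mode("append").partitionBy("ingest_date").parquet(
    "s3://example-curated/orders/"                                 # placeholder bucket
)

job.commit()
```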

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

India

Remote

Job Description
As an Azure DevOps Engineer, you will be responsible for:
Designing, developing, and managing Infrastructure as Code (IaC) using Terraform to automate the provisioning of Azure resources.
Building and maintaining CI/CD pipelines leveraging Azure DevOps Pipelines and GitHub Actions to support efficient code integration, testing, and deployment.
Administering and configuring core Azure services, including Networking, Storage, Compute, and Security.
Collaborating with data engineering teams to support and integrate Azure Data services such as Azure Databricks and Azure Data Factory.
Managing and deploying containerized applications using Docker, with orchestration via Kubernetes.
Writing and maintaining automation scripts using Bash and/or Python for system management and DevOps workflows (a minimal Python sketch follows this posting).

Profile Requirements
For this position of Azure DevOps Engineer, we are looking for someone with:
5+ years of experience in DevOps, with proven success in designing and implementing CI/CD workflows.
Expertise in Terraform for infrastructure automation.
Strong hands-on experience with Azure DevOps and GitHub Actions.
Solid understanding of core Azure services, including Networking, Compute, Storage, and Security.
Familiarity with Azure Data services such as Databricks and ADF (Azure Data Factory).
Proficiency in Docker and Kubernetes.
Strong working knowledge of Linux and scripting with Bash/Python.
Excellent problem-solving, communication, and collaboration skills.

Benefits
For this position of Azure DevOps Engineer, we plan to offer you:
Starting gross monthly salary: negotiable, depending on your skills and experience
Other ad-hoc bonuses (per company internal policy)
100% petty cash reimbursements
30-40 days paid absence
500+ lifelong learning courses (with new courses available on demand)
Corporate laptop
100% flexible working hours, subject to project demand
Work and travel opportunities in the EU and Canada

Adastra APAM Culture Manifesto
Servant Leadership: Managers are servants to employees. Managers are elected to make sure that employees have all the processes, resources, and information they need to provide services to clients in an efficient manner. Any manager, up to the CEO, is visible and reachable for a chat regardless of their title. Decisions are taken by consent in an agile manner and executed efficiently without undue delay. We accept that wrong decisions happen, and we appreciate the learning before we adjust the process for continuous improvement. Employees serve clients. Employees listen attentively to client needs and collaborate internally as a team to cater to them. Managers and employees work together to get things done and are accountable to each other. Corporate KPIs are transparently reviewed at monthly company events with all employees.
Performance-Driven Compensation: We recognize and accept that some of us are more ambitious, more gifted, or more hard-working. We also recognize that some of us look for a stable income and less hassle at a different stage of their careers. There is a place for everyone; we embrace and need this diversity. Grades in our company are not based on the number of years of experience; they are value-driven, based on everyone's ability to deliver their work to clients independently and/or lead others. There is no "annual indexation" of salaries; you may be upgraded several times within the year, or not at all, based on your own pace of progress, ambitions, relevant skillset, and recognition by clients.
Work-Life Integration: We challenge the notion of work-life balance; we embrace the notion of work-life integration instead. This philosophy views our lives as a single whole in which we serve ourselves, our families, and our clients in an integrated manner. We encourage 100% flexible working hours, where you arrange your own day. This means you are free when you have little work, but it also means extra effort if you are behind schedule. Working for clients that may be in different time zones means we give you the flexibility to design what your day will look like in accordance with personal and project preferences and needs. We value time and minimize time spent on Adastra meetings. We are also a remote-first company. While we have collaboration offices and social events, we encourage people to work 100% remotely from home whenever possible. This means saving time and money on commuting, staying home with elderly relatives and little ones, and not missing the special moments in life. It also means you can work from any of our other offices in Europe, North America, or Australia, or move to a place with a lower cost of living without impacting your income. We trust you by default until you fail our trust.
Global Diversity: Adastra is an international organization. We hire globally, and our biggest partners and clients are in Europe, North America, and Australia. We work in teams with individuals from different cultures, ethnicities, sexual preferences, political views, and religions. We have zero tolerance for anyone who does not respect others or is abusive in any way. We speak different languages to one another, but we speak English when we are together or with clients. Our company is a safe space where communication is encouraged but boundaries regarding sensitive topics are respected. We accept and converge together to serve our teams and clients and, ultimately, have a good time at work.
Lifelong Learning: On annual average, we invest 25% of our working hours in personal development and upskilling outside project work, regardless of seniority or role. We feature hundreds of courses on our Training Repo, and we continue to actively purchase or tailor hands-on content. We certify people at our expense. We like to say we are technology agnostic: we learn the principles of data management and apply them to different use cases and technology stacks. We believe that the juniors of today are the seniors of tomorrow; we treat everyone with respect and mentor them into the roles they deserve. We encourage seniors to give back to the IT community through leadership and mentorship. On your last day with us, we may give you an open-dated job offer so that you feel welcome to return home, as others did before you.
More About Adastra: Visit http://adastragrp.com and/or contact us: HRIN@adastragrp.com
FRAUD ALERT: Be cautious of fake job postings and individuals posing as Adastra employees. HOW TO VERIFY IT'S US: Our employees will only use email addresses ending in @adastragrp.com. Any other domains, even if similar, are not legitimate. We will never request any form of payment, including but not limited to fees, certification costs, or deposits. Please reach out to HRIN@adastragrp.com only in case you have any questions.
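As a rough illustration of the Bash/Python automation work mentioned in this posting, the sketch below wraps a Terraform init/plan/apply cycle in Python. It assumes the Terraform CLI is installed and already authenticated against the target Azure subscription; it is not Adastra's actual tooling.

```python
import subprocess
import sys

def run(cmd):
    """Run one CLI step, echo it, and abort the workflow on the first failure."""
    print("+ " + " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)

# Initialise providers/modules, produce a saved plan, then apply that exact plan.
run(["terraform", "init", "-input=false"])
run(["terraform", "plan", "-input=false", "-out=tfplan"])
run(["terraform", "apply", "-input=false", "tfplan"])
```

In a CI/CD pipeline (Azure DevOps or GitHub Actions), these three steps would usually be separate stages, with the saved plan reviewed before the apply step runs.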

Posted 4 weeks ago

Apply

0 years

5 - 9 Lacs

Bengaluru

On-site

Req ID: 330864
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Senior DevOps Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties:
DevOps experience establishing and managing CI/CD pipelines to automate the build, test, and deployment processes.
Experience provisioning and managing infrastructure resources in the cloud using tools like Terraform.
Experience with Azure Databricks, Azure DevOps tools, Terraform / Azure Resource Manager, and containerization and orchestration with Docker and Kubernetes.
Version control experience: Git or Azure Repos.
Scripting automation: Azure CLI/PowerShell.
Must have: Proficiency in cloud technologies: Azure, Azure Databricks, ADF, CI/CD pipelines, Terraform, HashiCorp Vault, GitHub, Git (a minimal Vault sketch in Python follows this posting).
Preferred: Containerization and orchestration with Docker and Kubernetes; IAM, RBAC, OAuth, change management, SSL certificates; knowledge of security best practices and compliance frameworks like GDPR or HIPAA.
Minimum Skills Required: as listed under Job Duties above.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here.
If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
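HashiCorp Vault appears in the must-have list above. The snippet below is a minimal sketch of reading a secret from a KV version 2 engine with the hvac Python client; the Vault address, token source, and secret path are illustrative assumptions.

```python
import os
import hvac

# Address and token are expected in the environment (an assumption for this sketch).
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])
assert client.is_authenticated(), "Vault login failed"

# Read a secret from the default KV v2 mount; the path is hypothetical.
response = client.secrets.kv.v2.read_secret_version(path="databricks/service-principal")
client_secret = response["data"]["data"]["client_secret"]
print("Fetched secret key 'client_secret' (value not printed).")
```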

Posted 4 weeks ago

Apply

10.0 - 15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Designation: Data Architect
Location: Pune
Experience: 10-15 years

Job Description
Role & Responsibilities:
The architect should have experience in architecting large-scale analytics solutions using native services such as Azure Synapse, Data Lake, Data Factory, HDInsight, Databricks, Azure Cognitive Services, Azure ML, and Azure Event Hub.
Assist with the creation of a robust, sustainable architecture that supports requirements and provides for expansion with secured access.
Experience in building/running large data environments for BFSI clients.
Work with customers, end users, technical architects, and application designers to define the data requirements and data structure for BI/analytics solutions.
Design conceptual and logical models for the data lake, data warehouse, data mart, and semantic layer (data structure, storage, and integration).
Lead the database analysis, design, and build effort.
Communicate physical database designs to the lead data architect/database administrator.
Evolve data models to meet new and changing business requirements.
Work with business analysts to identify and understand requirements and source data systems.

Skills Required
Big Data Technologies: Expert in big data technologies on Azure/GCP.
ETL Platforms: Experience with ETL platforms like ADF, Glue, Ab Initio, Informatica, Talend, Airflow.
Data Visualization: Experience in data visualization tools like Tableau, Power BI, etc.
Data Engineering & Management: Experience in a data engineering, metadata management, database modeling and development role.
Streaming Data Handling: Strong experience in handling streaming data with Kafka (a minimal Python consumer sketch follows this posting).
Data API Understanding: Understanding of data APIs and web services.
Data Security: Experience in data security, data archiving/backup, and encryption, and in defining the standard processes for the same.
DataOps/MLOps: Experience in setting up DataOps and MLOps.
Integration: Work with other architects to ensure that all components work together to meet objectives and performance goals as defined in the requirements.
Data Science Coordination: Coordinate with the data science teams to identify future data needs and requirements and create pipelines for them.

Soft Skills
Communication, leading the team, and taking ownership and accountability for successful engagements.
Participate in quality management reviews.
Manage customer expectations and business user interactions.
Deliver key research (MVP, POC) with an efficient turnaround time to help make strong product decisions.
Demonstrate key understanding and expertise in modern technologies, architecture, and design.
Mentor the team to deliver modular, scalable, and high-performance code.
Innovation: Be a change agent on key innovation and research to keep the product and team at the cutting edge of technical and product innovation.
(ref:hirist.tech)
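For the Kafka streaming requirement listed above, a consumer in Python might look like the following minimal sketch (using the kafka-python client); the broker address, topic name, and message fields are illustrative assumptions.

```python
import json
from kafka import KafkaConsumer

# Subscribe to a hypothetical events topic and decode JSON payloads.
consumer = KafkaConsumer(
    "trade-events",
    bootstrap_servers="broker-1:9092",
    group_id="analytics-ingestion",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In a real pipeline this record would be validated and written to the lake;
    # here we just surface two illustrative fields.
    print(event.get("trade_id"), event.get("notional"))
```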

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Primary skills: Technology->AWS->DevOps; Technology->Cloud Integration->Azure Data Factory (ADF); Technology->Cloud Platform->AWS Database; Technology->Cloud Platform->Azure DevOps->Azure Pipelines; Technology->DevOps->Continuous Integration - Mainframe

A day in the life of an Infoscion
As part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment.
You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs.
You will create requirement specifications from the business needs, define the to-be processes, and produce detailed functional designs based on requirements.
You will support configuring solution requirements on the products; understand any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives.
You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers.
If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data
Awareness of the latest technologies and trends
Logical thinking and problem-solving skills, along with an ability to collaborate
Ability to assess current processes, identify improvement areas, and suggest technology solutions
Knowledge of one or two industry domains

Posted 4 weeks ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description - Customer Success Services (CSS)
Are you passionate about problem solving? If you are enthusiastic about learning cutting-edge technologies, have an interest in innovation, and are customer-centric, we want you with us! Oracle is a technology leader that's changing how the world does business, and our Customer Success Services (CSS) team supports over 6,000 companies around the world. We're looking for an experienced and self-motivated Sr. / Sr. Principal Support Engineer - EBS Apps Developer. Join the team of highly skilled technical experts who build and maintain our clients' technical landscapes through tailored support services.
The EBS Oracle Applications developer is an experienced technical professional who understands business solutions, industry best practices, multiple business processes, and technology designs within the Oracle Applications supporting products and technologies. The candidate should have experience in the implementation or support of large to medium Oracle Applications implementation projects. He or she should be able to operate independently to provide quality work products and to perform varied and complex duties and tasks that require independent judgment.

Your Opportunity
We are looking for flexible and open-minded experts, able to work with different technologies and address complex architectures across on-premises, cloud, or hybrid environments. We look for engineers who can learn quickly, who are willing to work with new and innovative products and solutions, and who are able to interact and collaborate with people in different teams globally to always provide the best-tailored solution to Oracle customers. CSS offers a professional context where engineers can develop themselves constantly and always stay in touch with the most innovative technologies, both on-premises and in the cloud.

SKILLS:
Strong technical knowledge of Oracle applications, SQL, and PL/SQL is a must (a minimal Python query sketch follows this posting).
Strong knowledge of OAF, XML, Oracle Forms and Reports, AME, WF, and APEX is a must.
Java, ADF, JET, and PaaS skills.
Relevant Oracle technical certification.
Good understanding of the functional side of the developed code (preferably in Oracle Financials and HRMS).
Strong analytical and problem-solving skills.
Technical troubleshooting experience.

Our Ideal Candidate
In addition to the technical capabilities, our ideal candidate is a person who:
Is flexible to work in shifts, including night shifts, since the job involves working with customers in different time zones.
Can work independently on CEMLI objects: design, develop, and test.
Is technically strong in development and has experience with EBS Financials modules.
Can investigate, analyze, design, and develop solutions for enhancements/developments related to CEMLIs.
Can identify the impact of patches and determine the functional and technical steps required to minimize the disruption to business.
Reports progress/status/risks/issues on development on a regular basis.
Can manage the complete development pipeline and manage the scope, time, cost, and delivery of all the CEMLIs.
Can lead the support team in incident and problem management and come up with innovative solutions in a short span of time.
Can understand customer requirements/user stories and implement practical solutions.
Has hands-on knowledge of and expertise in Oracle EBS R12 and Fusion/SaaS modules.
Has good knowledge of business processes and application setups and the impact of one setup on another.

REQUIREMENTS:
Minimum 10 years of relevant experience.
Excellent problem-solving and troubleshooting skills.
Ability to work effectively in a team, collaborating with stakeholders to solve business needs.
Strong communication and teamwork skills.
Self-driven and results-oriented.
Collaborate with product owners, QA teams, and stakeholders to understand requirements, work on user stories/backlog items, and ensure high-quality delivery.
Ability to keep track of schedules and ensure on-time delivery of assigned tasks, optimizing pace and meeting deadlines.
Participate in standup meetings and provide progress updates regularly.
Experience in understanding customer requirements.
Good knowledge of business processes and application setups.
Good technical expertise in EBS/integrations architecture.
Fluent English (other additional languages will also be valued).
Availability to travel and work onsite at customer locations for not less than 50% of the time.
Availability to work 24x7 (on-call).

RESPONSIBILITIES:
Work on developing technical solutions to meet business requirements gathered and documented by functional consultants.
Identify and resolve key issues related to code change requirements and bug fixes.
Support Oracle ERP products and services from the technical aspect, in line with the contractual agreement.
Work with support to resolve customers' SRs.
Conduct knowledge transfer sessions both within the Oracle team and to end users.
Work closely with the functional team and delivery leaders to provide development work estimates and drive excellence in technical work.
Develop and manage the technical relationship with designated account(s) in order to maximize the value of CSS to the customer.
Develop and maintain trusted relationships with the other Oracle contacts within designated account(s) and relevant third parties.
Act as the technical primary point of contact for Oracle Support.
Safeguard customer satisfaction, and renewal, through quality delivery and added value.
Engage directly in architectural tasks and collaborate with colleagues to implement best practices specific to the projects.
Detect and address performance challenges, security issues, and other technical concerns proactively.
Analyze, troubleshoot, and solve, whenever feasible, the issues the customer may face using Oracle products.
Identify required/recommended actions on customer systems as the main output of service delivery, based on own knowledge and experience.
Escalate customer issues to the Technical Account Manager at the right time where relevant.
Ensure adherence to internal methodology, tools, and quality standards.
Actively participate in services development.
Actively collaborate with other engineers in the team or in other teams to share knowledge and experience that can benefit CSS business results.
Career Level - IC4

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
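Much of the EBS support work described in this posting involves querying application tables while troubleshooting. The snippet below is a minimal, illustrative sketch using the python-oracledb driver against the standard FND_CONCURRENT_REQUESTS table; the connection details and bind value are assumptions, not anything specific to an Oracle customer environment.

```python
import os
import oracledb

# Connection details come from the environment in this sketch (assumptions).
conn = oracledb.connect(
    user=os.environ["EBS_DB_USER"],
    password=os.environ["EBS_DB_PASSWORD"],
    dsn=os.environ["EBS_DB_DSN"],  # e.g. host:1521/service
)

sql = """
    SELECT request_id, phase_code, status_code, actual_completion_date
      FROM fnd_concurrent_requests
     WHERE requested_by = :user_id
     ORDER BY request_id DESC
"""

with conn.cursor() as cur:
    cur.execute(sql, user_id=0)  # bind value is illustrative
    for row in cur.fetchmany(10):
        print(row)
```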

Posted 1 month ago

Apply

6.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage (a minimal pipeline-trigger sketch follows this posting).
ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues.
Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills And Attributes For Success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
Solid understanding of ETL/ELT design and implementation principles
Strong SQL and PySpark skills for data transformation and validation
Exposure to Python for automation and scripting
Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
Experience in working with Power BI or Tableau for data visualization and reporting support
Strong problem-solving skills, attention to detail, and commitment to data quality
Excellent communication and documentation skills to interface with technical and business teams
Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.

To qualify for the role, you must have
4-6 years of experience in DataOps or Data Engineering roles
Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
Experience working with Informatica CDI or similar data integration tools
Scripting and automation experience in Python/PySpark
Ability to support data pipelines in a rotational on-call or production support environment
Comfortable working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
Azure Synapse: Familiarity with distributed data querying and data warehousing.
Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have
Power BI or Tableau for reporting support
Monitoring/logging using Azure Monitor or Log Analytics
Azure DevOps and Git for CI/CD and version control
Python and/or PySpark for scripting and data handling
Informatica Cloud Data Integration (CDI) or similar ETL tools
Shell scripting or command-line data handling
SQL (across distributed and relational databases)

What We Look For
Enthusiastic learners with a passion for DataOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations - Argentina, China, India, the Philippines, Poland and the UK - and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
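To make the ADF orchestration and monitoring duties above concrete, here is a minimal sketch that triggers a pipeline run and checks its status with the azure-identity and azure-mgmt-datafactory packages; the subscription, resource group, factory, pipeline, and parameter names are illustrative assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-dataops"           # hypothetical names
FACTORY_NAME = "df-enterprise"
PIPELINE_NAME = "pl_ingest_holdings"

credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, SUBSCRIPTION_ID)

# Kick off a pipeline run with a runtime parameter.
run = adf.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2024-06-30"},
)

# Poll the run once; production monitoring would loop or rely on alerts instead.
status = adf.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
print(status.status)  # e.g. Queued, InProgress, Succeeded, Failed
```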

Posted 1 month ago

Apply

0 years

6 - 8 Lacs

Bengaluru

On-site

Manage and maintain the RBI ADF/reporting system to ensure the timely and accurate submission of regulatory returns as and when required.
Act as Money Laundering Reporting Officer and perform all duties and responsibilities to ensure adherence to RBI rules and regulatory bodies.
Liaise with the RBI/FIU and other regulatory bodies as required to ensure compliance with RBI rules, regulations, and other requirements of a legal nature.
Provide managers of the other teams with appropriate and up-to-date information or data immediately upon request.
Authorize and release payment orders filtered by the OFAC filtering system.
Make returns for bank audits posted by the audit company.
Work closely with the Chief Executive Officer in overseeing compliance procedures and advise on risk management.
Assist the Chief Executive Officer with the development of the entity-wide budget for compliance efforts, including identifying resource gaps and directing resources appropriately, whether within the department or in other areas of the Bank.
Create processes and manuals according to KEB Hana Bank policy, to be reviewed periodically.
Manage daily and monthly audits.
Manage audits set up by the head office (H.O.).
Act in the capacity of Internal Auditor, ensuring that regular audits are performed across all departments of the branch.
Train all staff on internal control and AML and report to H.O.
Establish and execute the yearly compliance plan and report results to H.O.
Monitor the internal control process and submit a monthly compliance report to H.O.
Review and assess new contracts and renewals, proposals for launching new banking products/services, and submissions of the bank's internal data to external parties.
Manage and maintain close relationships with regulators to ensure cooperation.
Job Type: Full-time
Pay: ₹650,000.00 - ₹800,000.00 per year
Schedule: Day shift
Work Location: In person

Posted 1 month ago

Apply

10.0 years

26 - 30 Lacs

Chennai

On-site

We are looking for an Associate Division Manager for one of our major clients. This role includes designing and building AI/ML products at scale to improve customer understanding and sentiment analysis, recommend customer requirements, recommend optimal inputs, and improve process efficiency. The role will collaborate with product owners and business owners.

Key Responsibilities:
Lead a team of junior and experienced data scientists.
Lead and participate in end-to-end ML project deployments that require feasibility analysis, design, development, validation, and application of state-of-the-art data science solutions.
Push the state of the art in the application of data mining, visualization, predictive modelling, statistics, trend analysis, and other data analysis techniques to solve complex business problems, including lead classification, recommender systems, product life-cycle modelling, design optimization, and product cost and weight optimization (a minimal scikit-learn classification sketch follows this posting).

Functional Responsibilities:
Leverage and enhance applications utilizing NLP, LLM, OCR, image-based models, and deep learning neural networks for use cases including text mining, speech, and object recognition.
Identify future development needs, advance new and emerging ML and AI technology, and set the strategy for the data science team.
Cultivate a product-centric, results-driven data science organization.
Write production-ready code and deploy real-time ML models; expose ML outputs through APIs.
Partner with data/ML engineers and vendor partners on input data pipeline development and ML model automation.
Provide leadership to establish world-class ML lifecycle management processes.

Qualifications:
MTech / BE / BTech / MSc in CS.
Experience: Over 10 years of applied machine learning experience in the fields of machine learning, statistical modelling, predictive modelling, text mining, natural language processing (NLP), LLM, OCR, image-based models, and deep learning.
Expert Python programmer: SQL, C#, and extremely proficient with the SciPy stack (e.g. numpy, pandas, scikit-learn, matplotlib).
Proficiency with open-source deep learning platforms such as TensorFlow, Keras, and PyTorch.
Knowledge of the big data ecosystem (Apache Spark, Hadoop, Hive, EMR, MapReduce).
Proficient in cloud technologies and services (Azure Databricks, ADF, Databricks MLflow).

Functional Competencies:
A demonstrated ability to mentor junior data scientists and proven experience in collaborative work environments with external customers.
Proficient in communicating technical findings to non-technical stakeholders.
Conduct routine peer code reviews of ML work done by the team.
Experience in leading and/or collaborating with small to mid-sized teams.
Experienced in building scalable/highly available distributed systems in production.
Experienced with ML lifecycle management and MLOps tools and frameworks.

Job type: FTE
Location: Chennai
Job Type: Contractual / Temporary
Pay: ₹2,633,123.63 - ₹3,063,602.96 per year
Schedule: Monday to Friday
Education: Bachelor's (Preferred)
Work Location: In person
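Lead classification is one of the problem areas named above. As a rough illustration (not the client's actual approach), the sketch below trains a TF-IDF plus logistic-regression baseline with scikit-learn; the CRM-style notes and labels are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report

# Toy data standing in for CRM notes labelled converted (1) / not converted (0).
texts = [
    "requested pricing for the enterprise plan",
    "unsubscribed from the mailing list",
    "asked for a product demo next week",
    "email bounced, no engagement",
]
labels = [1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=42
)

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

In practice a pipeline like this would be tracked and deployed through MLflow on Databricks, consistent with the qualifications listed above.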

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies