7.0 years
0 Lacs
Gurgaon
On-site
Job Purpose Lead client calls, guide clients towards optimized, cloud-native architectures and the future state of their data platform, and provide strategic recommendations and Microsoft Fabric integration guidance. Desired Skills and experience Candidates should have a B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science or a related field 7+ years of experience in data and cloud architecture, working with client stakeholders AZ Data Platform Expertise: Synapse, Databricks, Azure Data Factory (ADF), Azure SQL (DW/DB), Power BI (PBI). Define modernization roadmaps and target architecture. Strong understanding of data governance best practices for data quality, cataloguing, and lineage. Proven ability to lead client engagements and present complex findings. Excellent communication skills, both written and verbal Extremely strong organizational and analytical skills with strong attention to detail Strong track record of excellent results delivered to internal and external clients Able to work independently without the need for close supervision and collaboratively as part of cross-team efforts Experience with delivering projects within an agile environment Experience in project management and team management Key responsibilities include: Lead all interviews & workshops to capture current/future needs. Direct the technical review of Azure (AZ) infrastructure (Databricks, Synapse Analytics, Power BI) and critical on-premises (on-prem) systems. Produce architecture designs (Arch. Designs), focusing on refined processing strategies and Microsoft Fabric. Understand and refine the Data Governance (Data Gov.) roadmap, including data cataloguing (Data Cat.), lineage, and quality. Lead project deliverables, ensuring actionable and strategic outputs. Evaluate and ensure quality of deliverables within project timelines Develop a strong understanding of the equity market domain Collaborate with domain experts and business stakeholders to understand business rules/logic Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders Independently troubleshoot difficult and complex issues on dev, test, UAT and production environments Responsible for end-to-end delivery of projects, coordination between client and internal offshore teams, and management of client queries Demonstrate high attention to detail, work in a dynamic environment whilst maintaining high quality standards, with a natural aptitude for developing good internal working relationships and a flexible work ethic Responsible for quality checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
Posted 2 days ago
8.0 years
0 Lacs
Pune
On-site
Company Description About Hitachi Solutions India Pvt Ltd: Hitachi Solutions, Ltd., headquartered in Tokyo, Japan, is a core member of Information & Telecommunication Systems Company of Hitachi Group and a recognized leader in delivering proven business and IT strategies and solutions to companies across many industries. The company provides value-driven services throughout the IT life cycle from systems planning to systems integration, operation and maintenance. Hitachi Solutions delivers products and services of superior value to customers worldwide through key subsidiaries in the United States, Europe, China and India. The flagship company in the Hitachi Group's information and communication system solutions business, Hitachi Solutions also offers solutions for social innovation such as smart cities. Our Competitive Edge We work together in a dynamic and rewarding work environment. We have an experienced leadership team, excellent technology and product expertise, and strong relationships with a broad base of customers and partners. We offer a competitive compensation and benefits package, regular performance reviews, performance bonuses, and regular training. What is it like working here? We pride ourselves on being industry leaders and providing an enjoyable work environment where our people can grow personally and professionally. Hitachi is the place people can develop skills they’re excited about. The following are our commitments to employees. We recognize our profitability and project success come from our team—great people doing great things. As such, we pursue profitable growth and expanded opportunities for our team. We offer challenging and diverse work across multiple industries and reward creativity and entrepreneurial innovation. We respect, encourage, and support each individual's need to continually learn and grow personally and professionally. We are committed to fostering our people. We listen. Every employee has something important to say that can contribute to enriching our environment. We compensate fairly. And while employees might come for the paycheck, they stay for the people. Our people are the reason we are exceptional. This is something we never forget. Job Description Power BI Architects are experts in data modeling and analysis and are responsible for developing high-quality datasets and visually stunning reports. They design and develop data models that effectively support business requirements, ensuring the accuracy and reliability of the data presented in the dashboards and reports. They possess proficiency in Power BI Desktop and expertise with SQL and DAX. Projects may range from short-term individual client engagements to multiyear delivery engagements with large, blended teams. Requirements: A minimum of 8 years of full-time experience using Power BI Desktop, with extensive knowledge of Power Query, Power Pivot, and Power View Able to quickly write SQL for database querying and DAX for creating custom calculations Possess good knowledge of M and Vertipaq Analyzer Understand data modeling concepts and be able to create effective data models to support reporting needs. Perform data ETL processes to ensure that data sets are clean, accurate, and ready for analysis. Work closely with stakeholders to understand requirements, deliver solutions that meet those needs, and bridge the gap between technical and non-technical sides.
Unwavering ability to quickly propose solutions by recalling the latest best practices learned from MVP & Product Team articles, MSFT documentation, whitepapers, and community publications Excellent communication, presentation, influencing, and reasoning skills Familiarity with the Azure data platform, e.g., ADLS, SQL Server, ADF, Databricks etc. We would like to see a blend of the following technical skills: Power BI Desktop, Power BI Dataflows, Tabular Editor, DAX Studio, and VertiPaq Analyzer T-SQL, DAX, M, and PowerShell Power BI Service architecture design and administration Understanding the business requirements, designing the data model, and developing visualizations that provide actionable insights VertiPaq and MashUp engine knowledge Data modeling using the Kimball methodology Qualifications Good verbal and written communication. Educational Qualification: BE/MCA/ Any Graduation. Additional Information Beware of scams Our recruiting team may communicate with candidates via our @hitachisolutions.com domain email address and/or via our SmartRecruiters (Applicant Tracking System) notification@smartrecruiters.com domain email address regarding your application and interview requests. All offers will originate from our @hitachisolutions.com domain email address. If you receive an offer or information from someone purporting to be an employee of Hitachi Solutions from any other domain, it may not be legitimate.
Posted 2 days ago
3.0 years
7 - 9 Lacs
Noida
On-site
We are looking for passionate engineers who design, develop, code and customize software applications from product conception to end-user interface. The person should be able to analyze and understand customer requirements and preferences, and incorporate these into the design and development process. About You – experience, education, skills, and accomplishments Bachelor's degree or higher in a related field, such as Computer Engineering or Computer Science, plus at least 3 years of software development experience, or an equivalent combination of education and experience. At least 3 years' experience working with E-Business Suite; specifically, with financials, order management, service contracts, inventory, Accounts Receivables and Advanced Pricing modules. At least 3 years' experience in performance tuning in E-Business Suite. Experience developing custom components using OAF and ADF workflow, and developing solutions using Oracle APEX. Experience integrating data from Oracle EBS to Salesforce, working with AIM, and formulating strategies for implementation. Expert knowledge of Oracle Applications interfaces, tables, and APIs. Expertise in RICE (developing new Reports, Interfaces, Customizations, Extensions, and form personalizations). It would be great if you also have … Experience in web technologies like HTML, JavaScript, CSS, jQuery Proficiency in Java with the ability to write clean, efficient and maintainable code in Java Experience in designing, developing and maintaining Java applications Sound knowledge of Object-Oriented Programming (OOP) concepts (Optionally) Experience in AngularJS and Angular What will you be doing in this role? Write clean, efficient, and maintainable code in accordance with coding standards. Review other code to ensure clean, efficient, and maintainable code. Define the architecture of the software solution. Suggest alternative methodologies or techniques to achieve desired results. Develop and maintain an understanding of the software development lifecycle and delivery methodology. Review and revise new procedures as needed for the continuing development of high-quality systems. Maintain knowledge of technical advances and evaluate new hardware/software for company use. Follow departmental policies, procedures, and work instructions. Work closely with higher-level engineers to increase functional knowledge. Automate tests and unit test all assigned applications. Participate as a team member on various engineering projects. Write application technical documentation. About the team: The position is for the Finance team within the Enterprise Services organization, a dynamic and collaborative group focused on supporting the company’s key finance applications, including order-to-cash functions, invoice delivery, cash collections, service contracts, third-party integrations, and the general ledger. This team ensures seamless and efficient financial processes, maintaining healthy cash flow and accurate financial reporting. The team is committed to continuous improvement, leveraging the latest technologies and best practices. Join a team that values collaboration, innovation, and excellence in supporting the company's financial operations and strategic goals. At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
Posted 2 days ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As a Data Engineer, you are required to: Design, build, and maintain data pipelines that efficiently process and transport data from various sources to storage systems or processing environments while ensuring data integrity, consistency, and accuracy across the entire data pipeline. Integrate data from different systems, often involving data cleaning, transformation (ETL), and validation. Design the structure of databases and data storage systems, including the design of schemas, tables, and relationships between datasets to enable efficient querying. Work closely with data scientists, analysts, and other stakeholders to understand their data needs and ensure that the data is structured in a way that makes it accessible and usable. Stay up-to-date with the latest trends and technologies in the data engineering space, such as new data storage solutions, processing frameworks, and cloud technologies. Evaluate and implement new tools to improve data engineering processes. Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Science or Engineering is desirable. Experience level: At least 3-5 years of hands-on experience in Data Engineering and ETL. Desired Knowledge & Experience: Spark: Spark 3.x, RDD/DataFrames/SQL, Batch/Structured Streaming Knowledge of Spark internals: Catalyst/Tungsten/Photon Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader IDE: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot Test: pytest, Great Expectations CI/CD: YAML Azure Pipelines, Continuous Delivery, Acceptance Testing Big Data Design: Lakehouse/Medallion Architecture, Parquet/Delta, Partitioning, Distribution, Data Skew, Compaction Languages: Python/Functional Programming (FP) SQL: TSQL/Spark SQL/HiveQL Storage: Data Lake and Big Data Storage Design Additionally, it is helpful to know the basics of: Data Pipelines: ADF/Synapse Pipelines/Oozie/Airflow Languages: Scala, Java NoSQL: Cosmos, Mongo, Cassandra Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model SQL Server: TSQL, Stored Procedures Hadoop: HDInsight/MapReduce/HDFS/YARN/Oozie/Hive/HBase/Ambari/Ranger/Atlas/Kafka Data Catalog: Azure Purview, Apache Atlas, Informatica Required Soft Skills & Other Capabilities: Great attention to detail and good analytical abilities. Good planning and organizational skills Collaborative approach to sharing ideas and finding solutions Ability to work independently and also in a global team environment.
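To give a concrete feel for the ingestion work described above (Autoloader, Structured Streaming, Delta, medallion layers), here is a minimal PySpark sketch of landing raw files into a bronze Delta table. It assumes a Databricks runtime where the `spark` session and the Auto Loader (`cloudFiles`) source are available; every path and column name is hypothetical.

```python
# Minimal sketch: Auto Loader ingestion into a bronze Delta table on Databricks.
# Assumes Databricks provides `spark`; paths, columns, and checkpoints are hypothetical.
from pyspark.sql import functions as F

raw_path = "abfss://landing@examplelake.dfs.core.windows.net/events/"
bronze_path = "abfss://lakehouse@examplelake.dfs.core.windows.net/bronze/events/"
checkpoint = "abfss://lakehouse@examplelake.dfs.core.windows.net/_checkpoints/events/"

stream = (
    spark.readStream.format("cloudFiles")              # Databricks Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", checkpoint)    # schema inference/evolution state
    .load(raw_path)
    .withColumn("ingest_date", F.current_date())        # audit/partition column
)

(
    stream.writeStream.format("delta")
    .option("checkpointLocation", checkpoint)
    .trigger(availableNow=True)                          # incremental, batch-style run
    .partitionBy("ingest_date")
    .start(bronze_path)
)
```

In a medallion layout, a similar job would then read this bronze table and apply cleansing into silver/gold tables; the pattern stays the same, only the source and transformations change.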
Posted 2 days ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
1. Experience in full lifecycle software projects – including client/server and web applications, with responsibilities ranging from system analysis, design, development, and unit testing to documentation. 2. Other technical skills such as SOA, ADF and UNIX shell scripting. 3. Extensive experience in writing packages, stored functions, stored procedures and triggers, and very strong PL/SQL skills. 4. Extensive work on Oracle APIs, Forms 6i/10g and Reports 6i/10g with Oracle Database 10g/11g, Discoverer, XML/BI Publisher, Workflows and Web ADI. 5. Good exposure to the Oracle AIM methodology, i.e. the preparation of documents such as MD050, MD070, CV040, CV060, TE020 and MD120. 6. Ability to learn domain knowledge related to the application in a short period of time.
Posted 2 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Company: Our Client is a leading Indian multinational IT services and consulting firm. It provides digital transformation, cloud computing, data analytics, enterprise application integration, infrastructure management, and application development services. The company caters to over 700 clients across industries such as banking and financial services, manufacturing, technology, media, retail, and travel & hospitality. Its industry-specific solutions are designed to address complex business challenges by combining domain expertise with deep technical capabilities. It has a global workforce of over 80,000 professionals and a presence in more than 50 countries. Job Title: Python Developer Locations: PAN INDIA Experience: 5+ Years (Relevant) Employment Type: Contract to Hire Work Mode: Work From Office Notice Period: Immediate to 15 Days Job Description: We are seeking an experienced Python Developer with a strong background in Azure cloud application development and data integration. The role involves building and maintaining cloud-based solutions, developing data pipelines, and transforming data from multiple sources to support business intelligence and analytics. Responsibilities: Design, implement, and manage cloud-based applications using Azure services Host websites and Python-based APIs on Azure Web Apps Develop Infrastructure as Code (IaC) for automating resource provisioning and deployment Write and maintain automation and deployment scripts Collect, aggregate, and manage data from various sources, including APIs, S3 buckets, Excel/CSV files, Azure Blob Storage and SharePoint Flatten and transform JSON data and model it for downstream processes Perform data transformations to ensure quality and consistency Build and maintain data pipelines and ETL processes Design and implement data flows using Azure Data Factory (ADF) Write and optimize SQL queries and Python scripts for data manipulation Work with Snowflake and other warehousing tools to manage data models and schemas Monitor and optimize the performance of data workflows Document processes, workflows, and best practices Collaborate with cross-functional teams to define and meet data requirements Skillset Required: Strong proficiency in Python and Azure application management Expertise in Azure Data Factory (ADF) Experience with Snowflake data warehousing Solid understanding of data integration and transformation Strong SQL skills and scripting experience Experience with various data formats and sources (JSON, CSV, Excel, etc.)
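As an illustration of the "flatten and transform JSON data" responsibility above, here is a small Python sketch using pandas. The sample payload, field names, and output columns are made up purely for demonstration; a real pipeline would read from an API, blob container, or SharePoint source instead.

```python
# Minimal sketch: flattening nested JSON for downstream loading (e.g., a staging table).
# The payload and field names below are hypothetical.
import json
import pandas as pd

payload = """
[
  {"id": 1, "customer": {"name": "Acme",   "region": "EU"}, "orders": [{"sku": "A1", "qty": 2}]},
  {"id": 2, "customer": {"name": "Globex", "region": "US"}, "orders": [{"sku": "B7", "qty": 5}]}
]
"""
records = json.loads(payload)

# json_normalize explodes the nested orders array and lifts nested customer fields
# into flat, underscore-separated columns.
flat = pd.json_normalize(
    records,
    record_path="orders",
    meta=["id", ["customer", "name"], ["customer", "region"]],
    sep="_",
)
print(flat)  # columns: sku, qty, id, customer_name, customer_region
```

The flattened frame can then be written to CSV/Parquet or bulk-loaded into Snowflake or Azure SQL as part of the ETL flow.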
Posted 2 days ago
3.0 years
0 Lacs
India
Remote
Title: Data Engineer Location: Remote Employment type: Full Time with BayOne We’re looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics. What You'll Do Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable Work on modern data lakehouse architectures and contribute to data governance and quality frameworks Tech Stack Azure | Databricks | PySpark | SQL What We’re Looking For 3+ years of experience in data engineering or analytics engineering Hands-on with cloud data platforms and large-scale data processing Strong problem-solving mindset and a passion for clean, efficient data design Job Description: Min 3 years of experience in modern data engineering/data warehousing/data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms. 5 years of proven experience with SQL, schema design and dimensional data modelling Solid knowledge of data warehouse best practices, development standards and methodologies Experience with ETL/ELT tools like ADF, Informatica, Talend etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery etc. Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL. Be an independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced and dynamic environment. Excellent communication and teamwork abilities. Nice-to-Have Skills: Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge. SAP ECC/S4 and HANA knowledge. Intermediate knowledge of Power BI Azure DevOps and CI/CD deployments, cloud migration methodologies and processes BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, disability, or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
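The dimensional-modelling and Databricks/PySpark skills listed above usually come together in incremental upserts into warehouse tables. Below is a minimal sketch of such an upsert into a Delta dimension table using the Delta Lake Python API; the table name, staging path, and business key are hypothetical, and it assumes a Databricks-style environment where `spark` and the `delta` package are available.

```python
# Minimal sketch: upsert (MERGE) of staged changes into a Delta dimension table.
# Table names, paths, and keys are hypothetical; assumes Databricks provides `spark`.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

updates_df = (
    spark.read.parquet("abfss://staging@examplelake.dfs.core.windows.net/customers_delta/")
    .withColumn("updated_at", F.current_timestamp())     # simple audit column
)

dim = DeltaTable.forName(spark, "analytics.dim_customer")  # existing Delta table

(
    dim.alias("t")
    .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")  # business key match
    .whenMatchedUpdateAll()       # refresh changed attributes
    .whenNotMatchedInsertAll()    # insert brand-new customers
    .execute()
)
```

A slowly changing dimension (SCD Type 2) would extend this with explicit `whenMatchedUpdate`/insert clauses that close out old rows instead of overwriting them.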
Posted 3 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Veersa - Veersa Technologies is a US-based IT services and AI enablement company founded in 2020, with a global delivery center in Noida (Sector 142). Founded by industry leaders with an impressive 85% YoY growth A profitable company since inception Team strength: Almost 400 professionals and growing rapidly Our Services Include: Digital & Software Solutions: Product Development, Legacy Modernization, Support Data Engineering & AI Analytics: Predictive Analytics, AI/ML Use Cases, Data Visualization Tools & Accelerators: AI/ML-embedded tools that integrate with client systems Tech Portfolio Assessment: TCO analysis, modernization roadmaps, etc. Tech Stack - * AI/ML, IoT, Blockchain, MEAN/MERN stack, Python, GoLang, RoR, Java Spring Boot, Node.js Databases: PostgreSQL, MySQL, MS SQL, Oracle Cloud: AWS & Azure (Serverless Architecture) Website: https://veersatech.com LinkedIn: Feel free to explore our company profile Job Description * Job Title: ETL Ingestion Engineer (Azure Data Factory) Department: Data Engineering / Analytics Employment Type: Full-time Experience Level: 2–5 years About The Role We are looking for a talented ETL Ingestion Engineer with hands-on experience in Azure Data Factory (ADF) to join our Data Engineering team. An individual will be responsible for building, orchestrating, and maintaining robust data ingestion pipelines from various source systems into our data lake or data warehouse environments. Key Responsibilities Design and implement scalable data ingestion pipelines using Azure Data Factory (ADF). Extract data from a variety of sources such as SQL Server, flat files, APIs, and cloud storage. Develop ADF pipelines and data flows to support both batch and incremental loads. Ensure data quality, consistency, and reliability throughout the ETL process. Optimize ADF pipelines for performance, cost, and scalability. Monitor pipeline execution, troubleshoot failures, and ensure data availability meets SLAs. Document pipeline logic, source-target mappings, and operational procedures. Required Qualifications Bachelor’s degree in Computer Science, Engineering, or a related field. 2+ years of experience in ETL development and data pipeline implementation. Strong hands-on experience with Azure Data Factory (ADF), including linked services, datasets, pipelines, and triggers. Proficiency in SQL and working with structured and semi-structured data (CSV, JSON, Parquet). Experience with Azure storage systems (ADLS Gen2, Blob Storage) and data movement. Familiarity with job monitoring and logging mechanisms in Azure. Preferred Skills Experience with Azure Data Lake, Synapse Analytics, or Databricks. Exposure to Azure DevOps for CI/CD in data pipelines. Understanding of data governance, lineage, and compliance requirements (GDPR, HIPAA, etc.). Knowledge of RESTful APIs and API-based ingestion. Job Title: ETL Lead – Azure Data Factory (ADF) Department: Data Engineering / Analytics Employment Type: Full-time Experience Level: 5+ years About The Role We are seeking an experienced ETL Lead with strong expertise in Azure Data Factory (ADF) to lead and oversee data ingestion and transformation projects across the organization. The role demands a mix of technical proficiency and leadership to design scalable data pipelines, manage a team of engineers, and collaborate with cross-functional stakeholders to ensure reliable data delivery. Key Responsibilities Lead the design, development, and implementation of end-to-end ETL pipelines using Azure Data Factory (ADF). 
Architect scalable ingestion solutions from multiple structured and unstructured sources (e.g., SQL Server, APIs, flat files, cloud storage). Define best practices for ADF pipeline orchestration, performance tuning, and cost optimization. Mentor, guide, and manage a team of ETL engineers—ensuring high-quality deliverables and adherence to project timelines. Work closely with business analysts, data modelers, and source system owners to understand and translate data requirements. Establish data quality checks, monitoring frameworks, and alerting mechanisms. Drive code reviews, CI/CD integration (using Azure DevOps), and documentation standards. Own delivery accountability across multiple ingestion and data integration workstreams. Required Qualifications Bachelor's or Master's degree in Computer Science, Data Engineering, or related discipline. 5+ years of hands-on ETL development experience, with 3+ years of experience in Azure Data Factory. Deep understanding of data ingestion, transformation, and warehousing best practices. Strong SQL skills and experience with cloud-native data storage (ADLS Gen2, Blob Storage). Proficiency in orchestrating complex data flows, parameterized pipelines, and incremental data loads. Experience in handling large-scale data migration or modernization projects. Preferred Skills Familiarity with modern data platforms like Azure Synapse, Snowflake, Databricks. Exposure to Azure DevOps pipelines for CI/CD of ADF pipelines and linked services. Understanding of data governance, security (RBAC), and compliance requirements. Experience leading Agile teams and sprint-based delivery models. Excellent communication, leadership, and stakeholder management skills.
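The "incremental data loads" and "parameterized pipelines" mentioned above typically follow a high-watermark pattern, which in ADF is built from Lookup and Copy activities. Purely to make that logic concrete, here is a plain Python/pyodbc sketch of the same idea; the connection string, control table, and column names are hypothetical, and this is not ADF's own configuration format.

```python
# Illustrative high-watermark incremental load (the pattern an ADF pipeline parameterizes).
# Connection details, tables, and columns are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=<server>;DATABASE=<db>;UID=<user>;PWD=<pwd>"
)
cur = conn.cursor()

# 1. Read the last successfully loaded watermark for this source table.
cur.execute("SELECT last_value FROM etl.watermark WHERE table_name = ?", "sales.orders")
last_watermark = cur.fetchone()[0]

# 2. Extract only rows changed since the previous run (the incremental slice).
cur.execute(
    "SELECT order_id, customer_id, amount, modified_date "
    "FROM sales.orders WHERE modified_date > ?",
    last_watermark,
)
changed_rows = cur.fetchall()
# ... land changed_rows in the lake / staging zone here ...

# 3. Advance the watermark only after the load has succeeded.
cur.execute(
    "UPDATE etl.watermark SET last_value = "
    "(SELECT MAX(modified_date) FROM sales.orders) WHERE table_name = ?",
    "sales.orders",
)
conn.commit()
conn.close()
```

In ADF the same three steps map to a Lookup activity (read watermark), a parameterized Copy activity (filtered source query), and a Stored Procedure or script activity (update watermark).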
Posted 3 days ago
5.0 years
0 Lacs
Delhi, India
On-site
About Veersa - Veersa Technologies is a US-based IT services and AI enablement company founded in 2020, with a global delivery center in Noida (Sector 142). Founded by industry leaders with an impressive 85% YoY growth A profitable company since inception Team strength: Almost 400 professionals and growing rapidly Our Services Include: Digital & Software Solutions: Product Development, Legacy Modernization, Support Data Engineering & AI Analytics: Predictive Analytics, AI/ML Use Cases, Data Visualization Tools & Accelerators: AI/ML-embedded tools that integrate with client systems Tech Portfolio Assessment: TCO analysis, modernization roadmaps, etc. Tech Stack - * AI/ML, IoT, Blockchain, MEAN/MERN stack, Python, GoLang, RoR, Java Spring Boot, Node.js Databases: PostgreSQL, MySQL, MS SQL, Oracle Cloud: AWS & Azure (Serverless Architecture) Website: https://veersatech.com LinkedIn: Feel free to explore our company profile Job Description * Job Title: ETL Ingestion Engineer (Azure Data Factory) Department: Data Engineering / Analytics Employment Type: Full-time Experience Level: 2–5 years About The Role We are looking for a talented ETL Ingestion Engineer with hands-on experience in Azure Data Factory (ADF) to join our Data Engineering team. An individual will be responsible for building, orchestrating, and maintaining robust data ingestion pipelines from various source systems into our data lake or data warehouse environments. Key Responsibilities Design and implement scalable data ingestion pipelines using Azure Data Factory (ADF). Extract data from a variety of sources such as SQL Server, flat files, APIs, and cloud storage. Develop ADF pipelines and data flows to support both batch and incremental loads. Ensure data quality, consistency, and reliability throughout the ETL process. Optimize ADF pipelines for performance, cost, and scalability. Monitor pipeline execution, troubleshoot failures, and ensure data availability meets SLAs. Document pipeline logic, source-target mappings, and operational procedures. Required Qualifications Bachelor’s degree in Computer Science, Engineering, or a related field. 2+ years of experience in ETL development and data pipeline implementation. Strong hands-on experience with Azure Data Factory (ADF), including linked services, datasets, pipelines, and triggers. Proficiency in SQL and working with structured and semi-structured data (CSV, JSON, Parquet). Experience with Azure storage systems (ADLS Gen2, Blob Storage) and data movement. Familiarity with job monitoring and logging mechanisms in Azure. Preferred Skills Experience with Azure Data Lake, Synapse Analytics, or Databricks. Exposure to Azure DevOps for CI/CD in data pipelines. Understanding of data governance, lineage, and compliance requirements (GDPR, HIPAA, etc.). Knowledge of RESTful APIs and API-based ingestion. Job Title: ETL Lead – Azure Data Factory (ADF) Department: Data Engineering / Analytics Employment Type: Full-time Experience Level: 5+ years About The Role We are seeking an experienced ETL Lead with strong expertise in Azure Data Factory (ADF) to lead and oversee data ingestion and transformation projects across the organization. The role demands a mix of technical proficiency and leadership to design scalable data pipelines, manage a team of engineers, and collaborate with cross-functional stakeholders to ensure reliable data delivery. Key Responsibilities Lead the design, development, and implementation of end-to-end ETL pipelines using Azure Data Factory (ADF). 
Architect scalable ingestion solutions from multiple structured and unstructured sources (e.g., SQL Server, APIs, flat files, cloud storage). Define best practices for ADF pipeline orchestration, performance tuning, and cost optimization. Mentor, guide, and manage a team of ETL engineers—ensuring high-quality deliverables and adherence to project timelines. Work closely with business analysts, data modelers, and source system owners to understand and translate data requirements. Establish data quality checks, monitoring frameworks, and alerting mechanisms. Drive code reviews, CI/CD integration (using Azure DevOps), and documentation standards. Own delivery accountability across multiple ingestion and data integration workstreams. Required Qualifications Bachelor's or Master's degree in Computer Science, Data Engineering, or related discipline. 5+ years of hands-on ETL development experience, with 3+ years of experience in Azure Data Factory. Deep understanding of data ingestion, transformation, and warehousing best practices. Strong SQL skills and experience with cloud-native data storage (ADLS Gen2, Blob Storage). Proficiency in orchestrating complex data flows, parameterized pipelines, and incremental data loads. Experience in handling large-scale data migration or modernization projects. Preferred Skills Familiarity with modern data platforms like Azure Synapse, Snowflake, Databricks. Exposure to Azure DevOps pipelines for CI/CD of ADF pipelines and linked services. Understanding of data governance, security (RBAC), and compliance requirements. Experience leading Agile teams and sprint-based delivery models. Excellent communication, leadership, and stakeholder management skills.
Posted 3 days ago
7.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
ECI is the leading global provider of managed services, cybersecurity, and business transformation for mid-market financial services organizations across the globe. From its unmatched range of services, ECI provides stability, security and improved business performance, freeing clients from technology concerns and enabling them to focus on running their businesses. More than 1,000 customers worldwide with over $3 trillion of assets under management put their trust in ECI. At ECI, we believe success is driven by passion and purpose. Our passion for technology is only surpassed by our commitment to empowering our employees around the world. The Opportunity: ECI has an exciting opportunity for a Cloud Data Engineer. The full-time position is open for an experienced Sr. Data Engineer who will support several of our clients' systems. Client satisfaction is our primary objective; all available positions are customer facing, requiring EXCELLENT communication and people skills. A positive attitude, rigorous work habits and professionalism in the workplace are a must. Fluency in English, both written and verbal, is required. This is an Onsite role. What you will do: A senior cloud data engineer with 7+ years of experience Strong knowledge and hands-on experience with Azure data services such as Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, Azure Data Lake, Logic Apps, Apache Spark, Snowflake data warehouse and Microsoft Fabric Good to have: Azure Databricks, Azure Cosmos DB, etc., Azure AI Must have experience in developing cloud-based applications. Should be able to analyze problems and provide solutions. Experience in designing, implementing, and managing data warehouse solutions using Azure Synapse Analytics or similar technologies. Experience in migrating data from on-premises to the cloud. Proficiency in data modeling techniques and experience in designing and implementing complex data models. Experience in designing and developing ETL/ELT processes to move data between systems and transform data for analytics. Strong programming skills in languages such as SQL, Python, or Scala, with experience in developing and maintaining data pipelines. Experience in at least one of the reporting tools such as Power BI / Tableau Ability to work effectively in a team environment and communicate complex technical concepts to non-technical stakeholders. Experience in managing and optimizing databases, including performance tuning, troubleshooting, and capacity planning Understand business requirements and convert them to technical designs for implementation. Understand business requirements, perform analysis, and develop and test code. Design and develop cloud-based applications using Python on a serverless framework Strong communication, analytical, and troubleshooting skills Create, maintain and enhance applications Work independently as an individual contributor with minimal or no help. Follow Agile methodology (SCRUM). Who you are: Experience in developing cloud-based data applications. Hands-on in Azure data services, data warehousing, ETL, etc. Understanding of cloud architecture principles and best practices, including scalability, high availability, disaster recovery, and cost optimization, with a focus on designing data solutions for the cloud. Experience in developing pipelines using ADF and Synapse. Hands-on experience in migrating data from on-premises to the cloud. Strong experience in writing complex SQL scripts and transformations. Able to analyze problems and provide solutions.
Knowledge of CI/CD pipelines is a plus. Knowledge of Python and API Gateway is an added advantage Bonus (Nice to have): Product Management/BA experience is nice to have. ECI’s culture is all about connection - connection with our clients, our technology and most importantly with each other. In addition to working with an amazing team around the world, ECI also offers a competitive compensation package and so much more! If you believe you would be a great fit and are ready for your best job ever, we would like to hear from you! Love Your Job, Share Your Technology Passion, Create Your Future Here!
Posted 3 days ago
6.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Designation – Sr. Consultant Experience – 6 to 7 years Location – Bengaluru Skills Required – Python, SQL, Databricks, ADF; within Databricks – DLT, PySpark, Structured Streaming, performance and cost optimization. Roles and Responsibilities: Capture business problems, value drivers, and functional/non-functional requirements and translate them into functionality. Assess the risks, feasibility, opportunities, and business impact. Assess and model processes, data flows, and technology to understand the current value and issues, and identify opportunities for improvement. Create/update clear documentation of requirements to align with the solution over the project lifecycle. Ensure traceability of requirements from business needs through testing and scope changes to the final solution. Interact with software suppliers, designers and developers to understand software limitations, deliver elements of system and database design, and ensure that business requirements and use cases are handled. Configure and document software and processes, using agreed standards and tools. Create acceptance criteria and validate that solutions meet business needs, through defining and coordinating testing. Create and present compelling business cases to justify solution value and establish approval, funding and prioritization. Initiate, plan, execute, monitor, and control Business Analysis activities on projects within agreed parameters of cost, time and quality. Lead stakeholder management activities and large design sessions. Lead teams to complete business analysis on projects. Configure and document software and processes. Define and coordinate testing. Agile project experience, with an understanding of Agile frameworks and tools and prior work in Agile.
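Since the skills list above calls out Delta Live Tables (DLT), PySpark, and Structured Streaming inside Databricks, here is a minimal DLT sketch in Python showing a streaming bronze table feeding a validated silver table. It only runs inside a Databricks DLT pipeline, and the source path, column names, and expectation are hypothetical.

```python
# Minimal Delta Live Tables (DLT) sketch: streaming bronze table -> validated silver table.
# Runs only inside a Databricks DLT pipeline; paths and columns are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events landed from cloud storage (bronze).")
def events_bronze():
    return (
        spark.readStream.format("cloudFiles")                 # Auto Loader source
        .option("cloudFiles.format", "json")
        .load("abfss://landing@examplelake.dfs.core.windows.net/events/")
    )

@dlt.table(comment="Validated, typed events (silver).")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")  # data-quality expectation
def events_silver():
    return (
        dlt.read_stream("events_bronze")
        .withColumn("event_ts", F.to_timestamp("event_time"))
        .select("event_id", "event_ts", "payload")
    )
```

DLT manages the checkpoints, retries, and table dependencies declared here, which is where much of the performance and cost optimization work mentioned above happens (cluster sizing, pipeline mode, expectations).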
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Gurgaon/Bangalore, India AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable - enabling AXA XL’s executive leadership team to maximize benefits and facilitate sustained industrious advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics team (IDA), is focused on driving innovation through optimizing how we leverage data to drive strategy and create a new business model - disrupting the insurance market. As we develop an enterprise-wide data and digital strategy that moves us toward greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team’s efforts towards creating, enhancing, and stabilizing the Enterprise data lake through the development of data pipelines. This role requires a person who is a team player and can work well with team members from other disciplines to deliver data in an efficient and strategic manner. What You’ll Be DOING What will your essential responsibilities include? Act as a data engineering expert and partner to Global Technology and data consumers in controlling complexity and cost of the data platform, whilst enabling performance, governance, and maintainability of the estate. Understand current and future data consumption patterns and architecture (at a granular level), and partner with Architects to ensure optimal design of data layers. Apply best practices in data architecture, for example the balance between materialization and virtualization, optimal level of de-normalization, caching and partitioning strategies, choice of storage and querying technology, and performance tuning. Lead the hands-on execution of research into new technologies, formulating frameworks for assessment of new technology vs business benefit and implications for data consumers. Act as a best practice expert and blueprint creator for ways of working such as testing, logging, CI/CD, observability, and release, enabling rapid growth in data inventory and utilization of the Data Science Platform. Design prototypes and work in a fast-paced iterative solution delivery model. Design, develop and maintain ETL pipelines using PySpark in Azure Databricks with Delta tables. Use Harness for the deployment pipeline. Monitor performance of ETL jobs, resolve any issues that arise and improve the performance metrics as needed. Diagnose system performance issues related to data processing and implement solutions to address them. Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture requirements. Maintain integrity and quality across all pipelines and environments. Understand and follow secure coding practices to make sure code is not vulnerable. You will report to the Technical Lead. What You Will BRING We’re looking for someone who has these abilities and skills: Required Skills And Abilities Effective communication skills. Bachelor’s degree in computer science, Mathematics, Statistics, Finance, a related technical field, or equivalent work experience. Relevant years of extensive work experience in various data engineering & modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills. Relevant years of programming experience using Databricks. Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse and ADLS).
Solid knowledge of network and firewall concepts. Solid experience writing, optimizing and analyzing SQL. Relevant years of experience with Python. Ability to break complex data requirements into achievable targets and architect solutions. Robust familiarity with Software Development Life Cycle (SDLC) processes and workflow, especially Agile. Experience using Harness. Technical lead responsible for both individual and team deliveries. Desired Skills And Abilities Worked in big data migration projects. Worked on performance tuning at both database and big data platforms. Ability to interpret complex data requirements and architect solutions. Distinctive problem-solving and analytical skills combined with robust business acumen. Excellent grasp of Parquet files and Delta files. Effective knowledge of the Azure cloud computing platform. Familiarity with reporting software - Power BI is a plus. Familiarity with DBT is a plus. Passion for data and experience working within a data-driven organization. You care about what you do, and what we do. Who WE are AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com What we OFFER Inclusion AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It’s about helping one another — and our business — to move forward and succeed. Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion with 20 Chapters around the globe. Robust support for Flexible Working Arrangements Enhanced family-friendly leave benefits Named to the Diversity Best Practices Index Signatory to the UK Women in Finance Charter Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer. Total Rewards AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence. Sustainability At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future.
Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations. Our Pillars Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We’re committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans. Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions. Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting. AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving. For more information, please see axaxl.com/sustainability.
Posted 3 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Purpose Lead client calls, guide clients towards optimized, cloud-native architectures and the future state of their data platform, and provide strategic recommendations and Microsoft Fabric integration guidance. Desired Skills And Experience Candidates should have a B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science or a related field 7+ years of experience in data and cloud architecture, working with client stakeholders AZ Data Platform Expertise: Synapse, Databricks, Azure Data Factory (ADF), Azure SQL (DW/DB), Power BI (PBI). Define modernization roadmaps and target architecture. Strong understanding of data governance best practices for data quality, cataloguing, and lineage. Proven ability to lead client engagements and present complex findings. Excellent communication skills, both written and verbal Extremely strong organizational and analytical skills with strong attention to detail Strong track record of excellent results delivered to internal and external clients Able to work independently without the need for close supervision and collaboratively as part of cross-team efforts Experience with delivering projects within an agile environment Experience in project management and team management Key responsibilities include: Lead all interviews & workshops to capture current/future needs. Direct the technical review of Azure (AZ) infrastructure (Databricks, Synapse Analytics, Power BI) and critical on-premises (on-prem) systems. Produce architecture designs (Arch. Designs), focusing on refined processing strategies and Microsoft Fabric. Understand and refine the Data Governance (Data Gov.) roadmap, including data cataloguing (Data Cat.), lineage, and quality. Lead project deliverables, ensuring actionable and strategic outputs. Evaluate and ensure quality of deliverables within project timelines Develop a strong understanding of the equity market domain Collaborate with domain experts and business stakeholders to understand business rules/logic Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders Independently troubleshoot difficult and complex issues on dev, test, UAT and production environments Responsible for end-to-end delivery of projects, coordination between client and internal offshore teams, and management of client queries Demonstrate high attention to detail, work in a dynamic environment whilst maintaining high quality standards, with a natural aptitude for developing good internal working relationships and a flexible work ethic Responsible for quality checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
Posted 3 days ago
0 years
4 - 7 Lacs
Hyderābād
On-site
As a member of the Support organization, your focus is to deliver post-sales support and solutions to the Oracle customer base while serving as an advocate for customer needs. This involves resolving post-sales non-technical customer inquiries via phone and electronic means, as well as, technical questions regarding the use of and troubleshooting for our Electronic Support Services. A primary point of contact for customers, you are responsible for facilitating customer relationships with Support and providing advice and assistance to internal Oracle employees on diverse customer situations and escalated issues. The Fusion Supply Chain / Manufacturing Support Team is expanding to support our rapidly increasing customer base in the Cloud (SaaS), as well as growing numbers of on-premise accounts. The team partners with Oracle Development in supporting early adopters and many other new customers. This is a unique opportunity to be part of the future of Oracle Support and help shape the product and the organization to benefit our customers and our employees. This position is for supporting Fusion Applications, particularly under the Fusion SCM modules - Fusion SCM Planning, Fusion SCM Manufacturing, Fusion SCM Maintenance. Research, resolve and respond to complex issues across the Application product lines and product boundaries in accordance with current standards Demonstrate strong follow-through and consistently keep commitments to customers and employees Ensure that each and every customer is handled with a consummately professional attitude and the highest possible level of service Take ownership and responsibility for priority customer cases where and when required Review urgent and critical incidents for quality Queue reviews with analysts to ensure quality and efficiency of support Report high visibility cases, escalation, customer trends to management Act as information resource to the management team Contribute to an environment that encourages information sharing, team-based resolution activity, cross training and an absolute focus on resolving customer cases as quickly and effectively as possible Participate in projects that enhance the quality or efficiency of support Participate in system and release testing, as needed Act as a role model and mentor for other analysts Perform detailed technical analysis and troubleshooting using SQL, PL/SQL,Java, ADF, Redwood, VBCS, SOA and Rest API Participate in after hour support as required. Work with Oracle Development/Support Development for product related issues Demonstrate core competencies (employ sound business judgment, creative and innovative ways to solve problems, strong work ethic and do whatever it takes to get the job done) Knowledge of Business process and functional knowledge required for our support organization for Maintenance Module Asset Management: Oversee the entire lifecycle of physical assets to optimize utilization and visibility into maintenance operations.Track and manage enterprise-owned and customer-owned assets, including Install Base Assets. Preventive maintenance/Maintenance Program: Define and generate daily preventive maintenance forecasts for affected assets within maintenance-enabled organizations. Utilize forecasts to create preventive maintenance work orders, reducing workload for planners and enhancing program auditing, optimization, and exception management. Work Definition: Identify and manage Maintenance Work Areas based on physical, geographical, or logical groupings of work centers. 
Define and manage resources, work centers, and standard operations. Develop reusable operation templates (standard operations) detailing operation specifics and required resources. Apply standard operations to multiple maintenance work definitions and work orders. Work Order creation, scheduling and Dispatch: Track material usage and labour hours against planned activities. Manage component installation and removal. Conduct inspections and ensure seamless execution of work orders. Work Order Transactions: Apply knowledge of operation pull, assembly pull, and backflush concepts. Execute operation transactions to update dispatch status in count point operations. Manage re- sequenced operations within work order processes. Charge maintenance work orders for utilized resources and ensure accurate transaction recording. Technical skills required for our support organization for Maintenance Module SQL and PL/SQL REST API - creating, different methods and testing via POSTMAN Knowledge of JSON format Knowledge of WSDL, XML and SOAP Webservices Oracle SOA - Composites, Business Events, debugging via SOA Composite trace and logs Java and Oracle ADF Oracle Visual Builder Studio (good to have) Page Composer(Fusion Apps) : Customize existing UI (good to have) Application Composer(Fusion Apps) - sandbox, creating custom object and fields, dynamic page layout and Object Functions (good to have) Career Level - IC3 Responsibilities As a Sr. Support Engineer, you will be the technical interface to customer) for resolution of problems related to the maintenance and use of Oracle products. Have an understanding of all Oracle products in their competencies and in-depth knowledge of several products and/or platforms. Also, you should be highly experienced in multiple platforms and be able to complete assigned duties with minimal direction from management. In this position, you will routinely act independently while researching and developing solutions to customer issues. Research, resolve and respond to complex issues across the Application product lines and product boundaries in accordance with current standards Demonstrate strong follow-through and consistently keep commitments to customers and employees Ensure that each and every customer is handled with a consummately professional attitude and the highest possible level of service Take ownership and responsibility for priority customer cases where and when required Review urgent and critical incidents for quality Queue reviews with analysts to ensure quality and efficiency of support Report high visibility cases, escalation, customer trends to management Act as information resource to the management team Contribute to an environment that encourages information sharing, team-based resolution activity, cross training and an absolute focus on resolving customer cases as quickly and effectively as possible Participate in projects that enhance the quality or efficiency of support Participate in system and release testing, as needed Act as a role model and mentor for other analysts Perform detailed technical analysis and troubleshooting using SQL, Java, ADF, Redwood, VBCS, SOA and Rest API Participate in after hour support as required. 
Work with Oracle Development/Support Development for product related issues Demonstrate core competencies (employ sound business judgment, creative and innovative ways to solve problems, strong work ethic and do whatever it takes to get the job done) Knowledge of Business process and functional knowledge required for our support organization for Maintenance Module Asset Management: Oversee the entire lifecycle of physical assets to optimize utilization and visibility into maintenance operations.Track and manage enterprise-owned and customer-owned assets, including Install Base Assets. Preventive maintenance/Maintenance Program: Define and generate daily preventive maintenance forecasts for affected assets within maintenance-enabled organizations. Utilize forecasts to create preventive maintenance work orders, reducing workload for planners and enhancing program auditing, optimization, and exception management. Work Definition: Identify and manage Maintenance Work Areas based on physical, geographical, or logical groupings of work centers. Define and manage resources, work centers, and standard operations. Develop reusable operation templates (standard operations) detailing operation specifics and required resources. Apply standard operations to multiple maintenance work definitions and work orders. Work Order creation, scheduling and Dispatch: Track material usage and labour hours against planned activities. Manage component installation and removal. Conduct inspections and ensure seamless execution of work orders. Work Order Transactions: Apply knowledge of operation pull, assembly pull, and backflush concepts. Execute operation transactions to update dispatch status in count point operations. Manage re- sequenced operations within work order processes. Charge maintenance work orders for utilized resources and ensure accurate transaction recording. Technical skills required for our support organization for Maintenance Module SQL and PL/SQL REST API - creating, different methods and testing via POSTMAN Knowledge of JSON format Knowledge of WSDL, XML and SOAP Webservices Oracle SOA - Composites, Business Events, debugging via SOA Composite trace and logs Java and Oracle ADF Oracle Visual Builder Studio (good to have) Page Composer(Fusion Apps) : Customize existing UI (good to have) Application Composer(Fusion Apps) - sandbox, creating custom object and fields, dynamic page layout and Object Functions (good to have)
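The technical skills above repeatedly mention exercising REST APIs (methods, JSON payloads, POSTMAN). As a small illustration of doing the same from a script while reproducing a support issue, here is a hedged Python sketch; the host, resource path, filter, and credentials are placeholders rather than a documented Oracle endpoint, so a real call would use the pod URL and resource path from the customer's environment.

```python
# Illustrative sketch: calling a Fusion-style REST resource outside POSTMAN.
# Host, resource path, query filter, and credentials are hypothetical placeholders.
import requests

BASE_URL = "https://example-fusion-host.oraclecloud.com"                       # placeholder pod
RESOURCE = "/fscmRestApi/resources/11.13.18.05/maintenanceWorkOrders"          # placeholder path

resp = requests.get(
    BASE_URL + RESOURCE,
    params={"q": "WorkOrderStatusCode='RELEASED'", "limit": 5},                # example filter
    auth=("integration.user", "password"),                                     # basic auth placeholder
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

# Fusion-style REST responses return JSON with an "items" collection.
for item in resp.json().get("items", []):
    print(item.get("WorkOrderNumber"), item.get("WorkOrderStatusCode"))
```

The same request body and headers can be replayed in POSTMAN when documenting reproduction steps for Development.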
Posted 3 days ago
5.0 years
4 - 5 Lacs
Hyderābād
On-site
Job Description: At least 5 years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies such as ADB (Azure Databricks), ADF, SQL (including the ability to write complex SQL queries), PySpark, Python, Synapse, Delta Tables and Unity Catalog. Hands-on in Python, PySpark or Spark SQL. Hands-on in Azure Analytics and DevOps. Taking part in Proof of Concepts (POCs) and pilot solution preparation. Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows. Experience in business process mapping for data and analytics solutions. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of the illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
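For illustration only: a minimal sketch of the day-to-day PySpark/Delta work this role describes. Table, column, and catalog names are placeholders, not details from the posting.

```python
# Read a Delta table, apply a typical transformation, and publish the result as a new Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-agg").getOrCreate()

# Read a Delta table registered in the metastore / Unity Catalog (name assumed).
orders = spark.read.table("main.sales.orders")

# Deduplicate, filter, and aggregate before publishing.
daily_revenue = (
    orders.dropDuplicates(["order_id"])
          .filter(F.col("status") == "COMPLETE")
          .groupBy(F.to_date("order_ts").alias("order_date"))
          .agg(
              F.sum("amount").alias("revenue"),
              F.countDistinct("customer_id").alias("customers"),
          )
)

# Write the result back as a Delta table, overwriting the previous snapshot.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("main.sales.daily_revenue")
```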
Posted 3 days ago
10.0 years
0 Lacs
India
On-site
Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in! Job Description REQUIREMENTS: Total experience: 10+ years. Strong experience in delivering data engineering projects with Python. Strong proficiency in Python for data analysis and scripting. Hands-on experience with Azure technologies (ADF, Synapse, etc.). Strong knowledge of ETL, data warehousing and business intelligence. Proficient in designing and developing data integration workflows. Strong experience with Azure Synapse Analytics for data warehousing. Solid experience with Databricks for big data processing. Experience in managing complex, technical development projects in the areas of ETL, data warehousing and BI. Excellent problem-solving skills, strong communication abilities, and a collaborative mindset. Relevant certifications in Azure or data engineering are a plus. RESPONSIBILITIES: Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements. Mapping decisions to requirements and translating them for developers. Identifying different solutions and narrowing down the best option that meets the client's requirements. Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code review through exhaustive, systematic root-cause analysis, and being able to justify the decisions taken. Carrying out POCs to make sure that the suggested design/technologies meet the requirements. Qualifications Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
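For illustration only: one common data-integration workflow of the kind this role describes is an incremental upsert (merge) into a warehouse table on Databricks. The sketch below is hedged; the staging and dimension table names are assumptions.

```python
# Incremental upsert into a Delta dimension table using the Delta Lake Python API.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

# New/changed rows landed by an upstream ingestion job (e.g., an ADF copy activity); name assumed.
updates = spark.read.table("staging.customers_changes")

target = DeltaTable.forName(spark, "dw.dim_customer")  # assumed target dimension

# Upsert: update existing customers, insert new ones.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```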
Posted 3 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for passionate engineers who design, develop, code, and customize software applications from product conception to end-user interface. The person should be able to analyze and understand customer requirements and preferences, and incorporate these into the design and development process. About You – Experience, Education, Skills, And Accomplishments Bachelor's degree or higher in a related field, such as Computer Engineering or Computer Science, plus at least 3 years of software development experience, or an equivalent combination of education and experience. At least 3 years' experience working with E-Business Suite; specifically with the Financials, Order Management, Service Contracts, Inventory, Accounts Receivables and Advanced Pricing modules. At least 3 years of experience performance tuning in E-Business Suite. Experience developing custom components using OAF and ADF workflow, and developing solutions using Oracle APEX. Experience integrating data from Oracle EBS to Salesforce, working with AIM, and formulating strategies for implementation. Expert knowledge of Oracle Applications interfaces, tables, and APIs. Expertise in RICE (developing new Reports, Interfaces, Customizations, Extensions, and form personalizations). It would be great if you also have … Experience in web technologies like HTML, JavaScript, CSS, jQuery. Proficiency in Java with the ability to write clean, efficient and maintainable code. Experience in designing, developing and maintaining Java applications. Sound knowledge of Object-Oriented Programming (OOP) concepts. (Optionally) Experience in AngularJS and Angular. What will you be doing in this role? Write clean, efficient, and maintainable code in accordance with coding standards. Review other code to ensure it is clean, efficient, and maintainable. Define the architecture of software solutions. Suggest alternative methodologies or techniques to achieve the desired results. Develop and maintain an understanding of the software development lifecycle and delivery methodology. Review and revise new procedures as needed for the continuing development of high-quality systems. Maintain knowledge of technical advances and evaluate new hardware/software for company use. Follow departmental policies, procedures, and work instructions. Work closely with higher-level engineers to increase functional knowledge. Automate tests and unit test all assigned applications. Participate as a team member on various engineering projects. Write application technical documentation. About The Team The position is for the Finance team within the Enterprise Services organization, a dynamic and collaborative group focused on supporting the company's key finance applications, including order-to-cash functions, invoice delivery, cash collections, service contracts, third-party integrations, and the general ledger. This team ensures seamless and efficient financial processes, maintaining healthy cash flow and accurate financial reporting. The team is committed to continuous improvement, leveraging the latest technologies and best practices. Join a team that values collaboration, innovation, and excellence in supporting the company's financial operations and strategic goals. At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
Posted 3 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Role We are looking for a skilled and motivated Data Analyst with 2–5 years of experience to join our team. In this role, you will work closely with the product team to support strategic decision-making by delivering data-driven insights, dashboards, and performance reports. Your ability to transform raw data into actionable insights will directly impact how we build and improve our products. Key Responsibilities Collaborate with the product team to understand data needs and define key performance indicators (KPIs) Develop and maintain insightful reports and dashboards using Power BI Write efficient and optimized SQL queries to extract and manipulate data from multiple sources Perform data analysis using Python and pandas for deeper trend analysis and data modeling Present findings clearly through visualizations and written summaries to stakeholders Ensure data quality and integrity across reporting pipelines Contribute to ongoing improvements in data processes and tooling Required Skills & Experience 2–5 years of hands-on experience as a Data Analyst or in a similar role Strong proficiency in SQL for querying and data manipulation Experience in building interactive dashboards with Power BI Good command of Python, especially pandas for data wrangling and analysis Experience with Databricks or other big data tools Understanding of Medallion Architecture and its application in analytics pipelines Strong communication and collaboration skills, especially in cross-functional team settings Good to Have Familiarity with data engineering practices, including data transformation using Databricks notebooks, Apache Spark SQL for distributed data processing, Azure Data Factory (ADF) for orchestration, and version control using Git Exposure to product analytics, cohort analysis, or A/B testing methodologies Interested candidates, please share your resume with balaji.kumar@flyerssoft.com
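For illustration only: a minimal sketch of the pandas-style trend analysis this role calls for. The event data and column names are placeholders; in practice the input would come from a SQL query or a Databricks table.

```python
# Compute a simple weekly KPI (active users per week, by feature) with pandas.
import pandas as pd

# Placeholder data standing in for a query result.
events = pd.DataFrame(
    {
        "user_id": [1, 1, 2, 3, 3, 3],
        "event_ts": pd.to_datetime(
            ["2024-01-01", "2024-01-08", "2024-01-02", "2024-01-03", "2024-01-10", "2024-01-17"]
        ),
        "feature": ["search", "search", "export", "search", "export", "export"],
    }
)

weekly_active = (
    events.assign(week=events["event_ts"].dt.to_period("W").dt.start_time)
          .groupby(["week", "feature"])["user_id"]
          .nunique()
          .reset_index(name="active_users")
)
print(weekly_active)
```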
Posted 3 days ago
6.0 years
0 Lacs
India
Remote
AI/ML Engineer – Senior Consultant The AI Engineering Group is part of the Data Science & AI Competency Center and focuses on the technical and engineering aspects of DS/ML/AI solutions. We are looking for experienced AI/ML Engineers to join our team to help us bring AI/ML solutions into production, automate processes, and define reusable best practices and accelerators. Duties description: The person we are looking for will become part of the Data Science and AI Competency Center, working in the AI Engineering team. The key duties are: Building high-performing, scalable, enterprise-grade ML/AI applications in a cloud environment Working with Data Science, Data Engineering and Cloud teams to implement Machine Learning models into production Practical and innovative implementations of ML/AI automation, for scale and efficiency Design, delivery and management of industrialized processing pipelines Defining and implementing best practices in the ML model life cycle and ML operations Implementing AI/MLOps frameworks and supporting Data Science teams with best practices Gathering and applying knowledge of modern techniques, tools and frameworks in the area of ML architecture and operations Gathering technical requirements and estimating planned work Presenting solutions, concepts and results to internal and external clients Being a technical leader on ML projects, defining tasks and guidelines and evaluating results Creating technical documentation Supporting and growing junior engineers Must have skills: Good understanding of ML/AI concepts: types of algorithms, machine learning frameworks, model efficiency metrics, model life cycle, AI architectures Good understanding of Cloud concepts and architectures, as well as working knowledge of selected cloud services, preferably GCP Experience in programming ML algorithms and data processing pipelines using Python At least 6-8 years of experience in production-ready code development Experience in designing and implementing data pipelines Practical experience with implementing ML solutions on GCP Vertex AI and/or Databricks Good communication skills Ability to work in a team and support others Taking responsibility for tasks and deliverables Great problem-solving skills and critical thinking Fluency in written and spoken English. Nice to have skills & knowledge: Practical experience with other programming languages: PySpark, Scala, R, Java Practical experience with tools like Airflow, ADF or Kubeflow Good understanding of CI/CD and DevOps concepts, and experience working with selected tools (preferably GitHub Actions, GitLab or Azure DevOps) Experience in applying and/or defining software engineering best practices Experience productizing ML solutions using technologies like Docker/Kubernetes We Offer: Stable employment. On the market since 2008, 1300+ talents currently on board in 7 global sites. 100% remote. Flexibility regarding working hours. Full-time position Comprehensive online onboarding program with a “Buddy” from day 1. Cooperation with top-tier engineers and experts. Internal Gallup Certified Strengths Coach to support your growth. Unlimited access to the Udemy learning platform from day 1. Certificate training programs. Lingarians earn 500+ technology certificates yearly. Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly. Grow as we grow as a company. 76% of our managers are internal promotions. A diverse, inclusive, and values-driven community.
Autonomy to choose the way you work. We trust your ideas. Create our community together. Refer your friends to receive bonuses. Activities to support your well-being and health. Plenty of opportunities to donate to charities and support the environment. Please click on this link to submit your application: https://system.erecruiter.pl/FormTemplates/RecruitmentForm.aspx?WebID=ac709bd295cc4008af7d0a7a0e465818
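For illustration only: a minimal MLOps-flavored sketch of the "ML model into production" work described in this posting, using MLflow tracking so a trained model can later be registered and deployed. The experiment name, features, and synthetic data are assumptions.

```python
# Train a small model and log parameters, metrics, and the model artifact with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real pipeline would read prepared features instead.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("churn-baseline")  # assumed experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # artifact ready to register/deploy downstream
```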
Posted 3 days ago
4.0 - 12.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Greetings from TCS! We are looking for Oracle EBS Technical. Experience: 4 - 12 years. Location: Kolkata. Must Have: Oracle EBS Technical. Responsibility of / Expectations from the Role: Experience in full lifecycle software projects – including client/server and web applications, with responsibilities ranging from system analysis, design, development and unit testing to documentation. Other technical skills like SOA, ADF and UNIX shell scripting. Extensive experience in writing packages, stored functions, stored procedures and triggers; very strong in PL/SQL. Extensive work with Oracle APIs, Forms 6i/10g and Reports 6i/10g with Oracle Database 10g/11g, Discoverer, XML/BI Publisher, Workflows and Web ADI. Good exposure to the Oracle AIM methodology, i.e. the preparation of documents such as MD050, MD070, CV040, CV060, TE020 and MD120. Ability to learn domain knowledge related to the application in a short period of time.
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Data Engineer – Databricks, Delta Live Tables, Data Pipelines Location: Bhopal / Hyderabad / Pune (On-site) Experience Required: 5+ Years Employment Type: Full-Time Job Summary: We are seeking a skilled and experienced Data Engineer with a strong background in designing and building data pipelines using Databricks and Delta Live Tables. The ideal candidate should have hands-on experience in managing large-scale data engineering workloads and building scalable, reliable data solutions in cloud environments. Key Responsibilities: Design, develop, and manage scalable and efficient data pipelines using Databricks and Delta Live Tables. Work with structured and unstructured data to enable analytics and reporting use cases. Implement data ingestion, transformation, and cleansing processes. Collaborate with Data Architects, Analysts, and Data Scientists to ensure data quality and integrity. Monitor data pipelines and troubleshoot issues to ensure high availability and performance. Optimize queries and data flows to reduce costs and increase efficiency. Ensure best practices in data security, governance, and compliance. Document architecture, processes, and standards. Required Skills: Minimum 5 years of hands-on experience in data engineering. Proficient in Apache Spark, Databricks, Delta Lake, and Delta Live Tables. Strong programming skills in Python or Scala. Experience with cloud platforms such as Azure, AWS, or GCP. Proficient in SQL for data manipulation and analysis. Experience with ETL/ELT pipelines, data wrangling, and workflow orchestration tools (e.g., Airflow, ADF). Understanding of data warehousing, big data ecosystems, and data modeling concepts. Familiarity with CI/CD processes in a data engineering context. Nice to Have: Experience with real-time data processing using tools like Kafka or Kinesis. Familiarity with machine learning model deployment in data pipelines. Experience working in an Agile environment.
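For illustration only: a minimal sketch of a Delta Live Tables pipeline in Python, the kind of pipeline this posting centers on. It runs inside a Databricks DLT pipeline (where the `spark` session is provided by the runtime), and the landing path and table names are assumptions.

```python
# Two-stage (bronze -> silver) Delta Live Tables pipeline definition.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested from cloud storage (bronze).")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")          # Databricks Auto Loader; `spark` comes from the DLT runtime
             .option("cloudFiles.format", "json")
             .load("/mnt/landing/orders/")             # assumed landing path
    )

@dlt.table(comment="Cleansed orders with basic quality rules applied (silver).")
@dlt.expect_or_drop("valid_amount", "amount > 0")      # drop rows violating the expectation
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
           .withColumn("order_date", F.to_date("order_ts"))
           .dropDuplicates(["order_id"])
    )
```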
Posted 3 days ago
3.0 years
0 Lacs
India
Remote
Title: Azure Data Engineer Location: Remote Employment type: Full-time with BayOne We’re looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics. What You'll Do Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable Work on modern data lakehouse architectures and contribute to data governance and quality frameworks Tech Stack Azure | Databricks | PySpark | SQL What We’re Looking For 3+ years of experience in data engineering or analytics engineering Hands-on experience with cloud data platforms and large-scale data processing Strong problem-solving mindset and a passion for clean, efficient data design Job Description: Minimum 3 years of experience in modern data engineering/data warehousing/data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms. 5 years of proven experience with SQL, schema design and dimensional data modelling Solid knowledge of data warehouse best practices, development standards and methodologies Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc. Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL. Be an independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced and dynamic environment. Excellent communication and teamwork abilities. Nice-to-Have Skills: Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge. SAP ECC/S/4 and HANA knowledge. Intermediate knowledge of Power BI Azure DevOps and CI/CD deployments, cloud migration methodologies and processes BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, basis of disability, or any other federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
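For illustration only: a minimal lakehouse-style sketch of the "clean, actionable data" work described above, promoting raw files from a bronze landing zone into a schema-enforced silver Delta table. Paths, schema, and table names are assumptions.

```python
# Schema-enforced ingestion from bronze JSON into a partitioned silver Delta table.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

schema = StructType([
    StructField("device_id", StringType(), nullable=False),
    StructField("reading", DoubleType(), nullable=True),
    StructField("event_ts", TimestampType(), nullable=True),
])

# Enforce the schema at read time so malformed records fail fast instead of polluting silver.
bronze = spark.read.schema(schema).json("/mnt/bronze/telemetry/")  # assumed landing path

silver = (
    bronze.filter(F.col("reading").isNotNull() & (F.col("reading") >= 0))
          .withColumn("event_date", F.to_date("event_ts"))
)

# Partition by date so downstream queries can prune efficiently.
silver.write.format("delta").mode("append").partitionBy("event_date").saveAsTable("lakehouse.silver_telemetry")
```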
Posted 4 days ago
8.0 years
0 Lacs
Greater Kolkata Area
On-site
Location: PAN India Duration: 6 Months Experience Required: 7–8 years Job Summary We are looking for an experienced SSAS Developer with strong expertise in developing both OLAP and Tabular models using SQL Server Analysis Services (SSAS), alongside advanced ETL development skills using tools like SSIS, Informatica, or Azure Data Factory. The ideal candidate will be well-versed in T-SQL, dimensional modeling, and building high-performance, scalable data solutions. Key Responsibilities Design, build, and maintain SSAS OLAP cubes and Tabular models Create complex DAX and MDX queries for analytical use cases Develop robust ETL workflows and pipelines using SSIS, Informatica, or ADF Collaborate with cross-functional teams to translate business requirements into BI solutions Optimize SSAS models for scalability and performance Implement best practices in data modeling, version control, and deployment automation Support dashboarding and reporting needs via Power BI, Excel, or Tableau Maintain and troubleshoot data quality, performance, and integration issues Must-Have Skills Hands-on experience with SSAS (Tabular & Multidimensional) Proficient in DAX, MDX, and T-SQL Advanced ETL skills using SSIS / Informatica / Azure Data Factory Knowledge of dimensional modeling (star & snowflake schema) Experience with Azure SQL / MS SQL Server Familiarity with Git and CI/CD pipelines Nice to Have Exposure to cloud data platforms (Azure Synapse, Snowflake, AWS Redshift) Working knowledge of Power BI or similar BI tools Understanding of Agile/Scrum methodology Bachelor's degree in Computer Science, Information Systems, or equivalent
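For illustration only: the T-SQL dimensional-loading step behind SSAS models is typically a MERGE into a star-schema dimension. The sketch below runs such a statement from Python via pyodbc purely for brevity; in this role the step would normally live in an SSIS or ADF package, and the connection string, driver, and table/column names are all assumptions.

```python
# Upsert a product dimension (star-schema load step) with a T-SQL MERGE executed via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=your-sql-server;DATABASE=dw;Trusted_Connection=yes;"
)

merge_dim_product = """
MERGE dbo.DimProduct AS t
USING staging.Product AS s
    ON t.ProductKey = s.ProductKey
WHEN MATCHED THEN
    UPDATE SET t.ProductName = s.ProductName, t.Category = s.Category
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductKey, ProductName, Category)
    VALUES (s.ProductKey, s.ProductName, s.Category);
"""

cur = conn.cursor()
cur.execute(merge_dim_product)  # update existing products, insert new ones
conn.commit()
cur.close()
conn.close()
```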
Posted 4 days ago