4.0 - 9.0 years
5 - 8 Lacs
Gurugram
Work from Office
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work with cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems (see the sketch after this listing)
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues

Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
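As an illustration of the self-healing automation this role describes, here is a minimal sketch in Python that polls Prometheus for a service's `up` metric and restarts the service when it is down. The Prometheus URL, job label, and systemd unit name are assumptions for illustration, not details from the posting.

```python
import subprocess

import requests

# Assumed endpoints/names for illustration only.
PROMETHEUS_URL = "http://prometheus.internal:9090/api/v1/query"
SERVICE_JOB = "nifi"            # hypothetical Prometheus job label
SYSTEMD_UNIT = "nifi.service"   # hypothetical systemd unit


def service_is_up() -> bool:
    # Query the standard Prometheus HTTP API for the `up` metric.
    resp = requests.get(
        PROMETHEUS_URL,
        params={"query": f'up{{job="{SERVICE_JOB}"}}'},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    # `up` is "1" when the target is healthy; no results means no target found.
    return bool(results) and results[0]["value"][1] == "1"


def heal() -> None:
    # Restart the unit and let the next poll confirm recovery.
    subprocess.run(["systemctl", "restart", SYSTEMD_UNIT], check=True)


if __name__ == "__main__":
    if not service_is_up():
        heal()
```

In practice a check like this would run on a schedule (cron or a Kubernetes CronJob) and emit an alert alongside the restart, so self-healing actions remain visible in the incident trail.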
Posted 3 weeks ago
7 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Lead Data Engineer (Hadoop, Hive, Python, SQL, Spark or PySpark)
Location: Hyderabad
Experience: 7+ Years

Role and responsibilities
Strong technical, analytical, and problem-solving skills
Strong organizational skills, with the ability to work autonomously as well as in a team-based environment
Data pipeline framework development (a sketch follows this listing)

Technical skills requirements
CDH on-premise for data processing and extraction
Ability to own and deliver on large, multi-faceted projects
Fluency in complex SQL and experience with RDBMSs
Project experience with CDH, Spark, PySpark, Scala, Python, NiFi, Hive, and NoSQL DBs
Experience designing and building big data pipelines
Experience working on large-scale, distributed systems
Experience with Databricks would be an added advantage
Strong hands-on experience with languages such as PySpark, Scala (with Spark), and Python
Exposure to various ETL and Business Intelligence tools
Experience in shell scripting to automate pipeline execution
Solid grounding in Agile methodologies
Experience with git and other source control systems
Strong communication and presentation skills

Regards,
Manvendra Singh
manvendra.singh1@incedoinc.com
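To make the data-pipeline-framework expectation concrete, here is a minimal PySpark sketch that reads a Hive table, aggregates it, and writes partitioned Parquet. The table, columns, and paths are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support lets Spark read managed Hive tables directly on a CDH cluster.
spark = (
    SparkSession.builder
    .appName("cdr-daily-aggregation")   # hypothetical job name
    .enableHiveSupport()
    .getOrCreate()
)

# `telecom.call_records` and its columns are illustrative placeholders.
calls = spark.table("telecom.call_records")

daily_usage = (
    calls
    .filter(F.col("status") == "COMPLETED")
    .groupBy("subscriber_id", F.to_date("started_at").alias("call_date"))
    .agg(
        F.sum("duration_sec").alias("total_duration_sec"),
        F.count("*").alias("call_count"),
    )
)

# Partitioning by date keeps downstream Hive/Impala scans cheap.
daily_usage.write.mode("overwrite").partitionBy("call_date") \
    .parquet("hdfs:///warehouse/aggregates/daily_usage")
```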
Posted 4 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
The Branding Club, a dynamic and forward-thinking branding agency, is embarking on an ambitious project to redefine the boundaries of digital branding. In collaboration with Hard.Coded, a leader in creating software development teams, we are assembling a high-performance team of around 17-20 FTE to deliver a groundbreaking product from the ground up. This project is not just about meeting expectations but exceeding them, setting new benchmarks for quality and innovation in the industry.

Role Overview
We are seeking a highly skilled and experienced Data Engineer to join our dynamic team. In this role, you will be crucial in managing and optimizing the data infrastructure required to support our cutting-edge applications. You will work closely with our development and product teams to ensure our data architecture aligns with the company's goals and supports the needs of our applications.

Key Responsibilities
Design and implement data pipelines to collect, process, and store large datasets.
Develop and maintain scalable ETL (Extract, Transform, Load) processes (see the sketch after this listing).
Collaborate with software engineers and data scientists to understand data requirements and deliver solutions.
Optimize and maintain data architecture, ensuring data quality, integrity, and availability.
Implement data governance and security measures to protect sensitive information.
Monitor and troubleshoot data pipelines to ensure reliable operation.
Stay up-to-date with the latest industry trends and technologies in data engineering.

Qualifications
Proven experience with SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB).
Experience with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
Strong experience with ETL tools and frameworks (e.g., Apache NiFi, Airflow, Talend).
Proficiency in programming languages such as Python, Java, or Scala.
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and related data services.
Knowledge of big data technologies (e.g., Hadoop, Spark, Kafka).
Strong understanding of data modeling and database design.
Excellent problem-solving skills and attention to detail.
Strong communication skills and the ability to work effectively in a team environment.

Nice to Have (but not mandatory)
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
Knowledge of machine learning pipelines and model deployment.
Familiarity with data visualization tools (e.g., Tableau, Power BI).

Additional Requirements
To be considered for this position, candidates must complete an assessment.
Preferred start date around 1st Sep.
This position is onsite around Kennedy Bridge in Mumbai.
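Since the role centers on scalable ETL with orchestrators like Airflow, here is a minimal sketch of a daily ETL DAG using the Airflow 2.x TaskFlow API. The task bodies, names, and schedule are placeholders, not the team's actual pipeline.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def etl_pipeline():
    @task
    def extract() -> list[dict]:
        # Placeholder: pull rows from a source system or API.
        return [{"id": 1, "amount": 42.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Placeholder: clean/enrich the records.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder: write to a warehouse such as Redshift or BigQuery.
        print(f"loaded {len(rows)} rows")

    # TaskFlow infers the extract -> transform -> load dependency chain.
    load(transform(extract()))


etl_pipeline()
```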
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities
Work closely with clients to understand their business requirements and design data solutions that meet their needs.
Develop and implement end-to-end data solutions that include data ingestion, data storage, data processing, and data visualization components.
Design and implement data architectures that are scalable, secure, and compliant with industry standards.
Work with data engineers, data analysts, and other stakeholders to ensure the successful delivery of data solutions.
Participate in presales activities, including solution design, proposal creation, and client presentations.
Act as a technical liaison between the client and our internal teams, providing technical guidance and expertise throughout the project lifecycle.
Stay up-to-date with industry trends and emerging technologies related to data architecture and engineering.
Develop and maintain relationships with clients to ensure their ongoing satisfaction and identify opportunities for additional business.
Understand the entire end-to-end AI lifecycle, from ingestion to inferencing, along with operations.
Exposure to Gen AI and emerging technologies.
Exposure to the Kubernetes platform, with hands-on experience deploying and containerizing applications.
Good knowledge of data governance, data warehousing, and data modelling.

Requirements
Bachelor's or Master's degree in Computer Science, Data Science, or related field.
10+ years of experience as a Data Solution Architect, with a proven track record of designing and implementing end-to-end data solutions.
Strong technical background in data architecture, data engineering, and data management.
Extensive experience working with any of the Hadoop flavours, preferably Data Fabric.
Experience with presales activities such as solution design, proposal creation, and client presentations.
Familiarity with cloud-based data platforms (e.g., AWS, Azure, Google Cloud) and related technologies such as data warehousing, data lakes, and data streaming.
Experience with Kubernetes and Gen AI tools and tech stack.
Excellent communication and interpersonal skills, with the ability to effectively communicate technical concepts to both technical and non-technical audiences.
Strong problem-solving skills, with the ability to analyze complex data systems and identify areas for improvement.
Strong project management skills, with the ability to manage multiple projects simultaneously and prioritize tasks effectively.

Tools and Tech Stack
Hadoop Ecosystem (Data Architecture and Engineering)
Preferred: Cloudera Data Platform (CDP) or Data Fabric.
Tools: HDFS, Hive, Spark, HBase, Oozie.
Data Warehousing
Cloud-based: Azure Synapse, Amazon Redshift, Google BigQuery, Snowflake, and Azure Databricks.
On-premises: Teradata, Vertica.
Data Integration and ETL Tools
Apache NiFi, Talend, Informatica, Azure Data Factory, Glue.
Cloud Platforms
Azure (preferred for its Data Services and Synapse integration), AWS, or GCP.
Cloud-native Components
Data Lakes: Azure Data Lake Storage, AWS S3, or Google Cloud Storage.
Data Streaming: Apache Kafka, Azure Event Hubs, AWS Kinesis.
HPE Platforms
Data Fabric, AI Essentials or Unified Analytics, HPE MLDM and HPE MLDE.
AI and Gen AI Technologies
AI Lifecycle Management (MLOps): MLflow, Kubeflow, Azure ML, SageMaker, or Ray.
Inference tools: TensorFlow Serving, KServe, Seldon.
Generative AI Frameworks: Hugging Face Transformers, LangChain (see the sketch after this listing).
Tools: OpenAI API (e.g., GPT-4).
Kubernetes Orchestration and Deployment
Platforms: Azure Kubernetes Service (AKS), Amazon EKS, Google Kubernetes Engine (GKE), or open-source Kubernetes.
Tools: Helm.
CI/CD for Data Pipelines and Applications
Jenkins, GitHub Actions, GitLab CI, or Azure DevOps.
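As a small illustration of the Gen AI frameworks this role lists (Hugging Face Transformers), here is a minimal text-generation sketch. The model choice and prompt are placeholders; any causal LM from the Hugging Face Hub would slot in the same way.

```python
from transformers import pipeline

# gpt2 is a small placeholder model; swap in any Hub text-generation model.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "A data fabric unifies access to distributed data by",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```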
Posted 4 weeks ago
7 - 9 years
0 Lacs
Pune, Maharashtra, India
On-site
About Improzo
At Improzo (Improve + Zoe; meaning Life in Greek), we believe in improving life by empowering our customers. Founded by seasoned industry leaders, we are laser focused on delivering quality-led commercial analytical solutions to our clients. Our dedicated team of experts in commercial data, technology, and operations has been evolving and learning together since our inception. Here, you won't find yourself confined to a cubicle; instead, you'll be navigating open waters, collaborating with brilliant minds to shape the future. You will work with leading Life Sciences clients, seasoned leaders and carefully chosen peers like you!

People are at the heart of our success, so we have defined our CARE values framework with a lot of effort, and we use it as our guiding light in everything we do. We CARE!
Customer-Centric: Client success is our success. Prioritize customer needs and outcomes in every action.
Adaptive: Agile and innovative, with a growth mindset. Pursue bold and disruptive avenues that push the boundaries of possibilities.
Respect: Deep respect for our clients & colleagues. Foster a culture of collaboration and act with honesty, transparency, and ethical responsibility.
Execution: Laser focused on quality-led execution; we deliver! Strive for the highest quality in our services, solutions, and customer experiences.

About The Role
We are seeking an experienced and highly skilled Data Architect to lead a strategic project focused on Pharma Commercial Data Management Operations. This role demands a professional with 7-9 years of experience in data architecture, data management, ETL, data transformation, and governance, with an emphasis on providing scalable and secure data solutions for the pharmaceutical sector. The ideal candidate will bring a deep understanding of data architecture principles, experience with cloud platforms such as Snowflake, and a solid background in driving commercial data management projects. If you're passionate about leading impactful data initiatives, optimizing data workflows, and supporting the pharmaceutical industry's data needs, we invite you to apply.

Key Responsibilities
Lead Data Architecture and Strategy:
Design, develop, and implement the overall data architecture for commercial data management operations within the pharmaceutical business.
Lead the design and operations of scalable and secure data systems that meet the specific needs of the pharma commercial team, including marketing, sales, and operations.
Define and implement best practices for data architecture, ensuring alignment with business goals and technical requirements.
Develop a strategic data roadmap for efficient data management and integration across multiple platforms and systems.
Data Integration, ETL & Transformation:
Oversee the ETL (Extract, Transform, Load) processes to ensure seamless integration and transformation of data from multiple sources, including commercial, sales, marketing, and regulatory databases.
Collaborate with data engineers and developers to design efficient and automated data pipelines for processing large volumes of data.
Lead efforts to optimize data workflows and improve data transformation processes to enhance reporting and analytics capabilities.
Data Governance & Quality Assurance:
Implement and enforce data governance standards across the data management ecosystem, ensuring the consistency, accuracy, and integrity of commercial data.
Develop and maintain policies for data stewardship, data security, and compliance with industry regulations, such as HIPAA, GDPR, and other pharma-specific compliance requirements.
Work closely with business stakeholders to ensure the proper definition of master data and reference data standards.
Cloud Platform Expertise (Snowflake (critical to have), AWS, Azure):
Lead the adoption and utilization of cloud-based data platforms, particularly Snowflake, to support data warehousing, analytics, and business intelligence needs (see the sketch after this listing).
Collaborate with cloud infrastructure teams to ensure efficient management of data storage, compute resources, and performance optimization within cloud environments.
Stay up-to-date with the latest cloud technologies, such as Snowflake, AWS, Azure, or Google Cloud (optional), and evaluate opportunities for incorporating them into data architectures.
Collaboration with Cross-functional Teams:
Work closely with business leaders in commercial operations, analytics, and IT teams to understand their data needs and provide strategic data solutions that enhance business operations.
Collaborate with data scientists, analysts, and business intelligence teams to ensure data is available for reporting, analysis, and decision-making.
Facilitate communication between IT, business stakeholders, and external vendors to ensure data architecture solutions align with business requirements.
Continuous Improvement & Innovation:
Drive continuous improvement efforts to optimize data pipelines, data storage, and analytics workflows.
Identify opportunities to improve data quality, streamline processes, and enhance the efficiency of data management operations.
Advocate for the adoption of new data management technologies, tools, and methodologies to improve data processing, security, and integration.
Leadership and Mentorship:
Lead and mentor a team of data engineers, analysts, and other technical resources, fostering a collaborative and innovative work environment.
Provide leadership in setting clear goals, performance metrics, and expectations for the team.
Offer guidance on data architecture best practices, ensuring all team members are aligned with the organization's data strategy.

Required Qualifications
Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field.
7-9 years of experience in data architecture, data management, and data governance, with a proven track record of leading commercial data management operations projects.
Extensive experience in data integration, ETL, and data transformation processes, including familiarity with tools like Informatica, Talend, or Apache NiFi.
Strong expertise with cloud platforms, particularly Snowflake, AWS, Azure, or Google Cloud.
Strong knowledge of data governance frameworks, including data security, privacy regulations, and compliance standards in the pharmaceutical industry (e.g., HIPAA, GDPR).
Hands-on experience in designing scalable and efficient data architecture solutions to support business intelligence, analytics, and reporting needs.
Proficient in SQL and other query languages, with a solid understanding of database management and optimization techniques.
Ability to communicate technical concepts effectively to non-technical stakeholders and align data strategies with business goals.

Preferred Qualifications
Experience in the pharmaceutical or life sciences sector, particularly in commercial data management, sales, marketing, or operations.
Certification or formal training in cloud platforms (e.g., Snowflake, AWS, Azure) or data management frameworks.
Familiarity with data science methodologies, machine learning, and advanced analytics tools.
Knowledge of Agile methodologies for managing data projects.

Key Skills
Data Architecture & Design
Cloud Platforms (Snowflake - critical to have)
Data Governance & Quality Assurance
ETL & Data Transformation
Data Integration & Pipelines
Pharmaceutical Data Management (Preferred)
SQL & Database Optimization
Leadership & Mentorship
Business & Technical Collaboration

Benefits
Competitive salary and benefits package.
Opportunity to work on cutting-edge tech projects, transforming the life sciences industry.
Collaborative and supportive work environment.
Opportunities for professional development and growth.

Skills: snowflake, database, data governance & quality assurance, data integration & pipelines, etl & data transformation, azure, business & technical collaboration, aws, data management, analytics, sql, sql & database optimization, cloud platforms (snowflake), leadership & mentorship, cloud platforms, data architecture & design, data
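Given the posting's emphasis on Snowflake, here is a minimal sketch using the official snowflake-connector-python package to load a staged file and run a sanity query. Every connection value, stage, table, and column name is a placeholder for illustration, not a real environment.

```python
import snowflake.connector

# All connection values are placeholders for illustration.
conn = snowflake.connector.connect(
    account="xy12345.us-east-1",
    user="ETL_USER",
    password="***",
    warehouse="COMMERCIAL_WH",
    database="PHARMA_COMMERCIAL",
    schema="SALES",
)

try:
    cur = conn.cursor()
    # Load a staged CSV into a table, then run a quick aggregate check.
    cur.execute(
        "COPY INTO RX_CLAIMS FROM @SALES_STAGE/claims/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    cur.execute(
        "SELECT territory, SUM(net_sales) FROM RX_CLAIMS GROUP BY territory"
    )
    for territory, net_sales in cur.fetchall():
        print(territory, net_sales)
finally:
    conn.close()
```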
Posted 4 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
1. Require candidates with 5+ years of experience in Data
2. Candidate must have skills: Apache NiFi
3. Desirable skills: Kafka, Airflow, Impala, Snowflake
Posted 4 weeks ago
0 years
0 Lacs
Thane, Maharashtra, India
On-site
Job Requirements
Role/Job Title: Data Architect
Business: New Age
Function/Department: Data & Analytics
Place of Work: Mumbai

Roles & Responsibilities
Developing and implementing an overall organizational data strategy that is in line with business processes. The strategy includes data model designs, database development standards, and implementation and management of data warehouses and data analytics systems.
Identifying data sources, both internal and external, and working out a plan for data management that is aligned with the organizational data strategy.
Coordinating and collaborating with cross-functional teams, stakeholders, and vendors for the smooth functioning of the enterprise data system.
Managing end-to-end data architecture, from selecting the platform, designing the technical architecture, and developing the application to finally testing and implementing the proposed solution.
Planning and execution of big data solutions using technologies such as Spark, Hadoop, AWS, NiFi, Kafka and Airflow (see the sketch after this listing).
Proficiency in data modeling and design, including SQL development and database administration.

Secondary Responsibilities
Data mining, visualization, and machine learning skills.
Give training and mentorship to team members to make them better on the job.

Key Success Metrics
Successfully deliver projects on committed time.
Lead and design technical aspects of the projects.
Timely updates of Jira and documentation.
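To illustrate the Kafka-based streaming ingestion this role plans for, here is a minimal consumer sketch using the kafka-python package. The broker address, topic, and consumer group are placeholders for illustration.

```python
import json

from kafka import KafkaConsumer

# Broker address, topic, and group id are placeholders for illustration.
consumer = KafkaConsumer(
    "transactions",
    bootstrap_servers=["broker1:9092"],
    group_id="data-platform-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Placeholder sink: a real pipeline would land these in HDFS/S3 or a warehouse.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```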
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

Oracle Customer Success Services
Building on the mindset that "Who knows Oracle better than Oracle?", Oracle Customer Success Services assists customers with their requirements for some of the most cutting-edge applications and solutions by utilizing the strengths of more than two decades of expertise in developing mission-critical solutions for enterprise customers, combining it with cutting-edge technology to provide speed, flexibility, resiliency, and security, and enabling customers to optimize their investment, minimize risk, and achieve more. The business was established with an entrepreneurial mindset and supports a vibrant, imaginative, and highly varied workplace. We are free of obligations, so we'll need your help to turn it into a premier engineering hub that prioritizes quality.

Why?
Oracle Customer Success Services Engineering is responsible for designing, building, and managing cutting-edge solutions, services, and core platforms to support the managed cloud business, including but not limited to OCI, Oracle SaaS, and Oracle Enterprise Applications. This position is for the CSS Engineering Team, and we are searching for the finest and brightest technologists as we embark on the road of cloud-native digital transformation. We operate under a garage culture, rely on cutting-edge technology in our daily work, and provide a highly innovative, creative, and experimental work environment. We prefer to innovate and move quickly, putting a strong emphasis on scalability and robustness. We need your assistance to build a top-tier engineering team that has a significant influence.

What?
As a senior member of the team, you will lead, and provide hands-on contributions to, the design and development of software products, services, and platforms, as well as the creation, testing, and management of the systems and applications we build, in line with our architecture patterns and standards. You will be expected to advocate for the adoption of software architecture and design patterns among cross-functional teams both within and outside of engineering roles. You will also be expected to act as a mentor and advisor to the team(s) within the software domain as a leader. As we push for digital transformation throughout the organization, you will constantly be expected to think creatively and optimize and harmonize business processes.

Required Qualifications:
Master's or Bachelor's in Computer Science, or a closely related field.
10+ years of experience in software development, data science, and data engineering design.
Advanced proficiency in Python and frameworks such as FastAPI and Dapr (see the sketch after this listing).
Demonstrated ability to write full-stack applications using polyglot programming with languages/frameworks like FastAPI, Python, and Golang.
Familiarity with OOP design principles (SOLID, DRY, KISS, Common Closure, and Module Encapsulation).
Proven ability to design software systems using various design patterns (Creational, Structural, and Behavioral).
Strong interpersonal skills and the ability to effectively communicate with business stakeholders.
Demonstrated ability to drive technology adoption in AI/ML solutions and the CNCF software stack.
Experience with real-time distributed systems using streaming data with Kafka, NiFi, or Pulsar.
Strong expertise in software design concepts, patterns (e.g., 12-Factor Apps), and tools to create CNCF-compliant software, with hands-on knowledge of containerization technologies like Docker and Kubernetes.
Proven ability to build and deploy software applications on one or more public cloud providers (OCI, AWS, Azure, GCP, or similar).
Experience designing API-first systems with application stacks like FARM and MERN, and technologies such as gRPC and REST.
Solid understanding of Design Thinking, Test-Driven Development (TDD), BDD, and end-to-end SDLC.
Experience in DevOps practices, including Kubernetes, CI/CD, Blue-Green, and Canary deployments.
Experience with microservice architecture patterns, including API Gateways, Event-Driven & Reactive Architecture, CQRS, and SAGA.
Hands-on experience working with various data types and storage formats, including NoSQL, SQL, Graph databases, and data serialization formats like Parquet and Arrow.
Experience building agentic systems with SLMs and LLMs using frameworks like LangGraph + LangChain, AutoGen, LlamaIndex, and Haystack, or equivalent.
Experience in data engineering using data lakehouse stacks, such as ETL/ELT and data processing with Apache Hadoop, Spark, Flink, Beam, and dbt.
Experience with data warehousing and lakes such as Apache Iceberg, Hudi, Delta Lake, and cloud-managed solutions like OCI Data Lakehouse.
Experience in data visualization and analytics with Apache Superset, Apache Zeppelin, Oracle Analytics Cloud, or similar.

Responsibilities
Core responsibilities include:
Provide thought leadership, technology oversight, and hands-on development direction to the development teams across the business.
Liaise with senior executives across multiple business lines to combine business requirements into technology work packages in alignment with the overall AI strategy and next-gen technology stack.
Lead the development of architecture patterns and integration with the full-stack software ecosystem and data engineering, and contribute to the design strategy.
Collaborate with product managers and development teams to identify software requirements and define project scopes.
Develop and maintain technical documentation, including architecture diagrams, design specifications, and system diagrams.
Analyze and recommend new software technologies and platforms to ensure the company stays ahead of the curve.
Work with development teams to ensure software projects are delivered on time, within budget, and to the required quality standards.
Provide guidance and mentorship to junior developers.
Stay up-to-date with industry trends and developments in software architecture and development practices.
Innovation and critical problem-solving skills with exceptional communication skills are a must in this role, as the Senior Architect will effectively act as a conduit between business executives, functional teams, and technology engineering teams. The role requires very strong technology thought leadership with practical hands-on knowledge, along with the influence to create a broader impact within the business and engineering functions.

Qualifications
Career Level - IC5

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
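Since the Oracle role above calls for advanced Python with FastAPI, here is a minimal service sketch. The route, service name, and stand-in scoring logic are illustrative placeholders.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inference-gateway")  # hypothetical service name


class ScoreRequest(BaseModel):
    features: list[float]


class ScoreResponse(BaseModel):
    score: float


@app.post("/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    # Placeholder model: the mean of the features stands in for a real predictor.
    value = sum(req.features) / len(req.features) if req.features else 0.0
    return ScoreResponse(score=value)

# Run with: uvicorn app:app --reload  (assuming this file is saved as app.py)
```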
Posted 4 weeks ago
0.0 years
0 Lacs
Mohali, Punjab
On-site
Senior Data Engineer (6-7 Years Experience Minimum)
Location: Mohali, Punjab (Full-Time, Onsite)
Company: Data Couch Pvt. Ltd.

About Data Couch Pvt. Ltd.
Data Couch Pvt. Ltd. is a premier consulting and enterprise training company specializing in Data Engineering, Big Data, Cloud Technologies, DevOps, and AI/ML. With a strong presence across India and global client partnerships, we deliver impactful solutions and upskill teams across industries. Our expert consultants and trainers work with the latest technologies to empower digital transformation and data-driven decision-making for businesses.

Technologies We Work With
At Data Couch, you'll gain exposure to a wide range of modern tools and technologies, including:
Big Data: Apache Spark, Hadoop, Hive, HBase, Pig
Cloud Platforms: AWS, GCP, Microsoft Azure
Programming: Python, Scala, SQL, PySpark
DevOps & Orchestration: Kubernetes, Docker, Jenkins, Terraform
Data Engineering Tools: Apache Airflow, Kafka, Flink, NiFi
Data Warehousing: Snowflake, Amazon Redshift, Google BigQuery
Analytics & Visualization: Power BI, Tableau
Machine Learning & MLOps: MLflow, Databricks, TensorFlow, PyTorch
Version Control & CI/CD: Git, GitLab CI/CD, CircleCI

Key Responsibilities
Design, build, and maintain robust and scalable data pipelines using PySpark
Leverage the Hadoop ecosystem (HDFS, Hive, etc.) for big data processing
Develop and deploy data workflows in cloud environments (AWS, GCP, or Azure)
Use Kubernetes to manage and orchestrate containerized data services
Collaborate with cross-functional teams to develop integrated data solutions
Monitor and optimize data workflows for performance, reliability, and security (a data-quality sketch follows this listing)
Follow best practices for data governance, compliance, and documentation

Must-Have Skills
Proficiency in PySpark for ETL and data transformation tasks
Hands-on experience with at least one cloud platform (AWS, GCP, or Azure)
Strong grasp of Hadoop ecosystem tools such as HDFS, Hive, etc.
Practical experience in Kubernetes for service orchestration
Proficiency in Python and SQL
Experience working with large-scale, distributed data systems
Familiarity with tools like Apache Airflow, Kafka, or Databricks
Experience working with data warehouses like Snowflake, Redshift, or BigQuery
Exposure to MLOps or integration of AI/ML pipelines
Understanding of CI/CD pipelines and DevOps practices for data workflows

What We Offer
Opportunity to work on cutting-edge data projects with global clients
A collaborative, innovation-driven work culture
Continuous learning via internal training, certifications, and mentorship
Competitive compensation and growth opportunities

Job Type: Full-time
Pay: ₹1,200,000.00 - ₹15,000,000.00 per year
Benefits: Health insurance, Leave encashment, Paid sick time, Paid time off
Schedule: Day shift
Work Location: In person
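A small sketch of the kind of data-quality gate a PySpark pipeline in this stack might include, where a failing check aborts the task (and thus the orchestrated run). The input path, column name, and threshold are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

# Placeholder input path; in practice this is the output of an upstream task.
df = spark.read.parquet("s3a://lake/curated/orders/")

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()

# Fail the task if more than 1% of IDs are null (threshold is illustrative).
if total == 0 or null_ids / total > 0.01:
    raise ValueError(f"DQ gate failed: {null_ids}/{total} null order_id values")
print(f"DQ gate passed: {null_ids}/{total} nulls")
```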
Posted 4 weeks ago
0 - 10 years
0 Lacs
Bengaluru, Karnataka
Work from Office
Solution Architect (AI / Gen AI)

At ABB, we are dedicated to addressing global challenges. Our core values: care, courage, curiosity, and collaboration - combined with a focus on diversity, inclusion, and equal opportunities - are key drivers in our aim to empower everyone to create sustainable solutions. Write the next chapter of your ABB story.

This position reports to: BL Technology Manager, Digital

Your role and responsibilities
In this role, you will have the opportunity to initiate and drive technology, software, product, and/or solution development using in-depth technical expertise in a specific area. Each day, you will act as the first point of contact in Research and Development (R&D) for in-depth product or technology-related issues. You will also showcase your expertise by supporting strategic corporate technology management and future product/software/solution architecture. The work model for the role is: #LI Hybrid. This role contributes to the Process Automation business of the Process Industries Division in Bangalore.

You will be mainly accountable for:
• Architecting and implementing AI, ML, and Gen AI solutions to address critical industrial challenges, including designing data pipelines, developing machine learning models, and deploying scalable solutions
• Partnering with cross-functional teams, including data engineers, data scientists, software developers, and domain experts, to ensure seamless integration and deployment of AI solutions
• Continuously researching and integrating the latest advancements in AI, ML, and Gen AI technologies into our solutions, and experimenting with new algorithms, frameworks, and tools to enhance solution performance
• Providing technical guidance and mentorship to junior team members, and leading code reviews, design discussions, and technical workshops

Qualifications for the role
Bachelor's or Master's degree in Computer Science, Software Engineering, or equivalent
Should have architected and designed AI / Gen AI / web-based software applications
8-10 years of experience designing software architecture, and overall software development experience of 15+ years
Experience implementing ML/AI/Gen AI industrial software in one of these industries: Cement / Mining / Pulp & Paper / similar
Proficiency in Python, R, Java/C++. Expertise in TensorFlow, PyTorch, Keras, Scikit-learn, and other relevant frameworks
Strong knowledge of data preprocessing, feature engineering, and data pipeline development using tools like Apache Airflow, Apache NiFi, and ETL processes

More about us
The Process Industries Division serves the mining, minerals processing, metals, cement, pulp and paper, battery manufacturing, and food and beverage industries, as well as their associated service industries. The Division brings deep industry domain expertise coupled with the ability to integrate both automation and electrical systems, increase productivity and reduce overall capital and operating costs for customers. For mining, metals and cement customers, solutions include specialized products and services, as well as total production systems. The Division designs, plans, engineers, supplies, installs and commissions integrated electrical and motion systems, including electric equipment, drives, motors, high power rectifiers and equipment for automation and supervisory control within a variety of areas including mineral handling, mining operations, aluminum smelting, hot and cold steel applications and cement production. The offering for the pulp and paper industries includes control systems, quality control systems, drive systems, on-line sensors, actuators and field instruments. Digitalization solutions, including collaborative operations and augmented reality, help improve plant and enterprise productivity, and reduce maintenance and energy costs.

We value people from different backgrounds. Apply today for your next career step within ABB and visit www.abb.com to learn about the impact of our solutions across the globe. #MyABBStory

"It has come to our attention that the name of ABB is being used for asking candidates to make payments for job opportunities (interviews, offers). Please be advised that ABB makes no such requests. All our open positions are made available on our career portal for all fitting the criteria to apply. ABB does not charge any fee whatsoever for the recruitment process. Please do not make payments to any individuals / entities in connection with recruitment with ABB, even if it is claimed that the money is refundable. ABB is not liable for such transactions. For current open positions you can visit our career website https://global.abb/group/en/careers and apply. Please refer to the detailed recruitment fraud caution notice using the link https://global.abb/group/en/careers/how-to-apply/fraud-warning"
Posted 4 weeks ago
10 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service: Tax
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Manager

Job Description & Summary
At PwC, our people in tax services focus on providing advice and guidance to clients on tax planning, compliance, and strategy. These individuals help businesses navigate complex tax regulations and optimise their tax positions. In quantitative tax solutions and technologies at PwC, you will focus on leveraging data analytics and technology to develop innovative tax solutions. In this field, you will use quantitative methods to optimise tax processes and enhance decision-making for clients.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: As a Sr. Data Engineer with experience of up to 10 years, you will be responsible for providing technical solutions to business problems in the data warehouse and data lake.

Responsibilities:
You will work closely with the business team during requirement gathering and further collaborate with other teams to develop a solution.
Hands-on designing, building, and maintaining a Data Lake and Data Warehouse on AWS or Azure to support Data and AI/ML workloads.
Should have fair knowledge of data storage, data security, and data cataloging for a data lake.
Should have fair knowledge of programming languages like Python, PySpark and SQL.
Strong understanding of AWS cloud components like S3, Redshift, DMS, Glue, Athena, Airflow, EMR, NiFi and any ETL tool (see the sketch after this listing).
Should have 8-10 years of experience in data engineering and development.
Should have experience of various data architecture patterns of Data Lake and Data Warehouse.
Should have experience of various data modelling techniques, data design patterns, Star Schema, etc.
Should have exposure to the internal workings of various reporting tools like Power BI, QuickSight, etc.
Ability to multitask across different tracks.
Proven experience in design, architecture review and impact analysis of technical changes.
Hands-on knowledge of various cloud platforms like AWS.
Proven experience in working with large teams.
Excellent presentation and communication skills.
Excellent interpersonal skills.
Highly self-motivated and eager to learn; always watching out for new technologies and adopting appropriate ones for improving your productivity, as well as the quality and effectiveness of your deliverables.
Should be able to do POCs on emerging technologies.
Well versed with emerging technology trends.
Working experience of Generative AI will be a big plus.
Any Data Engineer certification would be an added advantage.

Mandatory skill sets: Data Engineer
Preferred skill sets: Data Engineer
Years of experience required: 8 to 15 Yrs
Education qualification: B.Tech / M.Tech

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor Degree, Master Degree
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Sales Taxes
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Coaching and Feedback, Communication, Corporate Tax Planning, Creativity, Data Analytics, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Professional Courage, Relationship Building, Scenario Planning, Self-Awareness, Service Excellence, Statistical Analysis, Statistical Theory, Strategic Questioning {+ 6 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
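To ground the AWS components this role lists (S3, Athena, Glue), here is a minimal boto3 sketch that runs an Athena query and polls for completion. The region, database, table, and output bucket are placeholders for illustration; production code would add backoff, timeouts, and result pagination.

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Database, table, and output bucket are placeholders for illustration.
execution = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) FROM sales.orders GROUP BY region",
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://analytics-results/athena/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(f"Query {query_id} finished with state {state}")
```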
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Scala Developer
Location: Pune
Experience: 6-8 Yrs

Responsibilities
· Strong people management, leadership, and organizational skills. Outstanding communication (written and verbal) skills.
· Experience leveraging open-source tools, predictive analytics, machine learning, advanced statistics, and other data techniques to perform basic analyses.
· Demonstrated basic knowledge of statistical analytical techniques, coding, and data engineering.
· Experience developing and configuring dashboards is a plus.
· Demonstrated judgement when escalating issues to the project team.
· High proficiency in Python/Spark, Hadoop platforms & tools (Hive, Impala, Airflow, NiFi), and SQL.
· Curiosity, creativity, and excitement for technology and innovation.
· Demonstrated quantitative and problem-solving abilities.
· Expert proficiency in using Python/Scala, Spark (tuning jobs), SQL, and Hadoop platforms to build Big Data products & platforms.
· Comfortable developing shell scripts for automation.
· Proficient in standard software development practices, such as version control, testing, and deployment.
· Experience with visualization tools like Tableau and Looker.
· At least 5 years leading collaborative work on complex engineering projects in an Agile setting, e.g. Scrum.
· Extensive data warehousing/data lake development experience, with strong data modelling and data integration experience.
· Good SQL and higher-level programming languages, with solid knowledge of data mining, machine learning algorithms and tools.
· Strong hands-on experience in Analytics & Computer Science.
· Experience building and deploying production-level data-driven applications and data processing workflows/pipelines and/or implementing machine learning systems at scale in Java, Scala, or Python, and delivering analytics involving all phases: data ingestion, feature engineering, modelling, tuning, evaluating, monitoring, and presenting.
· At least 10 years of relevant hands-on experience as a Data Engineer in an individual contributor capacity.
· Able to lead the implementation of machine learning production systems.
· Demonstrated ability, through hands-on experience, to develop production machine learning pipelines.

Note: Interested candidates, please share your resume at veenab@innovativeinfotech.com
Posted 1 month ago
5 - 8 years
0 Lacs
Vadodara, Gujarat, India
Hybrid
Note: If shortlisted, we'll contact you via WhatsApp and email. Please check both and respond promptly.

Work Mode: Hybrid (3 days onsite/week)
Location: Vadodara, Gujarat - 390003
Shift: US Timings - EST / CST
Salary: INR 1500000 - 2500000

About The Role
We are seeking a highly skilled Senior Java Developer - Microservices to join our dynamic team and contribute to the modernization of enterprise applications. You will play a key role in transforming a monolithic architecture into a scalable, microservices-driven ecosystem.

Key Responsibilities
Design, develop, and maintain Java-based applications using Spring Boot
Refactor legacy systems into a microservices architecture
Work on private or hybrid cloud infrastructure, ensuring scalability and compliance
Integrate with Apache Kafka for real-time streaming and message processing
Use Oracle databases for transactional data management and support data migration via Apache NiFi
Collaborate in an Agile team environment
Optimize application performance and conduct root-cause analysis
Maintain technical documentation and enforce coding standards

Required Skills And Qualifications
5+ years of experience in Java (8 or higher) development
Strong expertise with Spring Boot
Hands-on with Apache Kafka and inter-service communication
Proficient in Oracle Database for transactional systems
Experience with private/hybrid cloud infrastructure and security
Strong analytical and problem-solving abilities
Excellent communication and time management skills

Must-Haves
Minimum Bachelor's / Master's in Computer Science or related field
No job gaps or frequent job changes (must have good stability)
Notice Period: Immediate to 30 days
Posted 1 month ago
5 - 8 years
0 Lacs
Vapi, Gujarat, India
On-site
Overview
We are seeking a highly skilled Senior Data Engineer to lead the design, development, and optimization of large-scale data infrastructure, pipelines, and platforms. This role requires expertise across the full spectrum of data engineering, including cloud and on-premise systems, real-time streaming, orchestration frameworks, and robust data governance practices. You will be a key contributor in shaping the organization's data ecosystem, enabling analytics, machine learning, and real-time decision-making at scale. This position demands not just competence but mastery of the core disciplines and tools in modern data engineering.

Responsibilities

Data Platform Architecture
Design, implement, and manage hybrid data platforms across on-premise and cloud environments.
Build scalable and reliable data lake and warehouse solutions using best-in-class storage formats and compute frameworks.
Collaborate with cross-functional teams to define data architecture and technology strategy aligned with business objectives.

Ingestion and Orchestration
Develop and maintain robust data ingestion workflows using tools such as Apache NiFi, Kafka, and custom connectors.
Implement data orchestration pipelines with tools like Dagster, Airflow, or Prefect for both batch and streaming data (a Dagster sketch follows this listing).
Build modular, maintainable workflows that adhere to best practices in monitoring, error handling, and retries.

ETL/ELT Pipeline Development
Design and optimize ETL/ELT pipelines that process data at scale from multiple systems into analytical environments.
Ensure data workflows are highly performant, idempotent, and compliant with SLAs.
Use Spark, dbt, or custom code to transform, enrich, and validate data.

Data Modeling and Warehousing
Create and maintain normalized and denormalized schemas for analytical workloads using star and snowflake models.
Work with cloud and on-premise databases and warehouses including PostgreSQL, Redshift, BigQuery, Snowflake, and Hive.
Define partitioning, bucketing, and indexing strategies to ensure query efficiency.

Infrastructure and DevOps
Deploy and maintain infrastructure using Terraform, Ansible, or shell scripting for both cloud and on-premise systems.
Implement CI/CD pipelines for data services using Jenkins, GitLab CI, or similar tools.
Utilize Docker and optionally Kubernetes to package and manage data applications.

Data Governance and Quality
Define and enforce data quality policies using tools like Great Expectations or Deequ.
Establish lineage and metadata tracking through solutions like Apache Atlas, Amundsen, or Collibra.
Implement access control, encryption, and audit policies to ensure data security and compliance.

Monitoring and Optimization
Monitor pipeline health, job performance, and system metrics using Prometheus, Grafana, or ELK.
Continuously optimize workflows and queries to minimize cost and latency.
Perform root cause analysis and troubleshooting for data issues in production systems.

Collaboration and Leadership
Mentor junior and mid-level data engineers, participate in technical reviews, and help define team standards.
Work closely with data scientists, analysts, software engineers, and product managers to gather requirements and translate them into robust data solutions.
Promote a culture of high quality, documentation, reusability, and operational excellence.

Qualifications

Required
Bachelor's or Master's degree in Computer Science, Engineering, or related field.
At least 5 years of experience as a data engineer, with expertise in both on-premise and cloud environments.
Deep experience with Apache NiFi, Dagster, and orchestration frameworks such as Airflow or Prefect.
Proficiency in Python, SQL, and optionally Scala or Java.
Strong understanding of distributed systems, including Hadoop, Spark, and Kafka.
Demonstrated experience building secure, scalable, and maintainable data pipelines and infrastructure.
Familiarity with modern data stack tools and infrastructure automation.

Preferred
Experience with real-time data processing and CDC pipelines.
Exposure to regulatory and high-security environments (e.g., healthcare, finance, industrial systems).
Certifications in AWS, GCP, or Azure for data engineering or analytics.
Contributions to open-source data tooling or internal platform development.

What We Offer
A high-impact role in a data-driven organization focused on innovation and scalability.
Flexible working environment and strong support for personal development.
Competitive compensation and benefits, including performance-based incentives.
Opportunities to work on high-visibility projects and influence enterprise data architecture.

How to Apply
Please share your updated CV along with the following details:
Current CTC
Expected CTC
Notice Period
Email to: jignesh.pandoriya@merillife.com
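Since the role names Dagster specifically, here is a minimal software-defined-asset sketch; the asset names, data, and validation logic are placeholders for illustration.

```python
from dagster import Definitions, asset


@asset
def raw_orders() -> list[dict]:
    # Placeholder extraction step; real code would pull from NiFi/Kafka or a DB.
    return [{"order_id": 1, "amount": 99.5}, {"order_id": None, "amount": 10.0}]


@asset
def validated_orders(raw_orders: list[dict]) -> list[dict]:
    # Keep only well-formed records; Dagster tracks this lineage automatically
    # because the upstream asset is declared as a function parameter.
    return [o for o in raw_orders if o.get("order_id") is not None]


# Registering assets makes them schedulable and observable in the Dagster UI.
defs = Definitions(assets=[raw_orders, validated_orders])
```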
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role OSTTRA India The Role: Enterprise Architect - Integration The Team: The OSTTRA Technology team is composed of Capital Markets Technology professionals, who build, support and protect the applications that operate our network. The technology landscape includes high-performance, high-volume applications as well as compute intensive applications, leveraging contemporary microservices, cloud-based architectures. The Impact: Together, we build, support, protect and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets. What’s in it for you: The current objective is to identify individuals with 16+ years of experience who have high expertise, to join their existing team of experts who are spread across the world. This is your opportunity to start at the beginning and get the advantages of rapid early growth. This role is based out in Gurgaon and expected to work with different teams and colleagues across the globe. This is an excellent opportunity to be part of a team based out of Gurgaon and to work with colleagues across multiple regions globally. Responsibilities The role shall be responsible for establishing, maintaining, socialising and realising the target state integration strategy for FX & Securities Post trade businesses of Osttra. This shall encompass the post trade lifecycle of our businesses including connectivity with clients, markets ecosystem and Osttra’s post trade family of networks and platforms and products. The role shall partner with product architects, product managers, delivery heads and teams for refactoring the deliveries towards the target state. They shall be responsible for the efficiency, optimisation, oversight and troubleshooting of current day integration solutions, platforms and deliveries as well, in addition target state focus. The role shall be expected to produce and maintain integration architecture blueprint. This shall cover current state and propose a rationalised view of target state of end-to-end integration flows and patterns. The role shall also provide for and enable the needed technology platforms/tools and engineering methods to realise the strategy. The role enable standardisation of protocols / formats (at least within Osttra world) , tools and reduce the duplication & non differentiated heavy lift in systems. The role shall enable the documentation of flows & capture of standard message models. Integration strategy shall also include transformation strategy which is so vital in a multi-lateral / party / system post trade world. Role shall partner with other architects and strategies / programmes and enable the demands of UI, application, and data strategies. What We’re Looking For Rich domain experience of financial services industry preferably with financial markets, Pre/post trade life cycles and large-scale Buy/Sell/Brokerage organisations Should have experience of leading the integration strategies and delivering the integration design and architecture for complex programmes and financial enterprises catering to key variances of latency / throughput. 
Experience with API Management platforms (like AWS API Gateway, Apigee, Kong, MuleSoft Anypoint) and key management concepts (API lifecycle management, versioning strategies, developer portals, rate limiting, policy enforcement) Should be adept with integration & transformation methods, technologies and tools. Should have experience of domain modelling for messages / events / streams and APIs. Rich experience of architectural patterns like Event driven architectures, micro services, event streaming, Message processing/orchestrations, CQRS, Event sourcing etc. Experience of protocols or integration technologies like FIX, Swift, MQ, FTP, API etc. .. including knowledge of authentication patterns (OAuth, mTLS, JWT, API Keys), authorization mechanisms, data encryption (in transit and at rest), secrets management, and security best practices Experience of messaging formats and paradigms like XSD, XML, XSLT, JSON, Protobuf, REST, gRPC, GraphQL etc … Experience of technology like Kafka or AWS Kinesis, Spark streams, Kubernetes / EKS, AWS EMR Experience of languages like Java, python and message orchestration frameworks like Apache Camel, Apache Nifi, AWS Step Functions etc. Experience in designing and implementing traceability/observability strategies for integration systems and familiarity with relevant framework tooling. Experience of engineering methods like CI/CD, build deploy automation, Infra as code and integration testing methods and tools Should have appetite to review / code for complex problems and should find interests / energy in doing design discussions and reviews. Experience and strong understanding of multicloud integration patterns. The Location: Gurgaon, India About Company Statement OSTTRA is a market leader in derivatives post-trade processing, bringing innovation, expertise, processes and networks together to solve the post-trade challenges of global financial markets. OSTTRA operates cross-asset post-trade processing networks, providing a proven suite of Credit Risk, Trade Workflow and Optimisation services. Together these solutions streamline post-trade workflows, enabling firms to connect to counterparties and utilities, manage credit risk, reduce operational risk and optimise processing to drive post-trade efficiencies. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. These businesses have an exemplary track record of developing and supporting critical market infrastructure and bring together an established community of market participants comprising all trading relationships and paradigms, connected using powerful integration and transformation capabilities. About OSTTRA Candidates should note that OSTTRA is an independent firm, jointly owned by S&P Global and CME Group. As part of the joint venture, S&P Global provides recruitment services to OSTTRA - however, successful candidates will be interviewed and directly employed by OSTTRA, joining our global team of more than 1,200 post trade experts. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. OSTTRA is a joint venture, owned 50/50 by S&P Global and CME Group. 
With an outstanding track record of developing and supporting critical market infrastructure, our combined network connects thousands of market participants to streamline end-to-end workflows - from trade capture at the point of execution, through portfolio optimization, to clearing and settlement. Joining the OSTTRA team is a unique opportunity to help build a bold new business with an outstanding heritage in financial technology, playing a central role in supporting global financial markets. Learn more at www.osttra.com.

What's In It For You? Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards - small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), BSMGMT203 - Entry Professional (EEO Job Group)
Job ID: 315309
Posted On: 2025-05-12
Location: Gurgaon, Haryana, India
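For illustration only: the responsibilities above mention capturing standard message models as part of the integration and transformation strategy. A minimal Python sketch of what such a canonical model could look like follows; the CanonicalTrade fields and the FIX-style input are hypothetical examples, not OSTTRA's actual message model.

# Minimal sketch of a canonical post-trade message model (hypothetical fields).
# Illustrates normalising a FIX-style source message into one standard shape,
# so downstream consumers do not each re-parse the source format.
import json
from dataclasses import dataclass, asdict

@dataclass
class CanonicalTrade:
    trade_id: str
    instrument: str
    quantity: float
    price: float
    currency: str

def from_fix(fix_fields: dict) -> CanonicalTrade:
    # FIX tag numbers: 17=ExecID, 55=Symbol, 38=OrderQty, 44=Price, 15=Currency
    return CanonicalTrade(
        trade_id=fix_fields["17"],
        instrument=fix_fields["55"],
        quantity=float(fix_fields["38"]),
        price=float(fix_fields["44"]),
        currency=fix_fields.get("15", "USD"),
    )

if __name__ == "__main__":
    fix_msg = {"17": "EXEC-001", "55": "EURUSD", "38": "1000000", "44": "1.0842", "15": "EUR"}
    print(json.dumps(asdict(from_fix(fix_msg)), indent=2))

Normalising at the edge like this is what lets the rest of the estate share one documented model instead of duplicating per-protocol parsing.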
Posted 1 month ago
12 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Company
We are Mindsprint! A leading-edge technology and business services firm that provides impact-driven solutions to businesses, enabling them to outpace the speed of change. For over three decades we have been accelerating technology transformation for the Olam Group and their large base of global clients. Working with leading technologies and empowered with the freedom to create new solutions and better existing ones, we have been inspiring businesses with pioneering initiatives.

Awards bagged in recent years:
Great Place To Work® Certified™ for 2023-2024
Best Shared Services in India Award by Shared Services Forum – 2019
Asia's No.1 Shared Services in Process Improvement and Value Creation by Shared Services and Outsourcing Network Forum – 2019
International Innovation Award for Best Services and Solutions – 2019
Kincentric Best Employer India – 2020
Creative Talent Management Impact Award – SSON Impact Awards 2021
The Economic Times Best Workplaces for Women – 2021 & 2022
#SSFExcellenceAward for Delivering Business Impact through Innovative People Practices – 2022

For more info: https://www.mindsprint.org/
Follow us on LinkedIn: Mindsprint

Position: Associate Director

Responsibilities
Lead, mentor, and manage the Data Architects, Apps DBA, and DB Operations teams.
Possess strong experience and deep understanding of major RDBMS, NoSQL, and Big Data technologies, with expertise in system design and advanced troubleshooting in high-pressure production environments.
Core technologies include SQL Server, PostgreSQL, MySQL, TigerGraph, Neo4j, Elasticsearch, ETL concepts, and a high-level understanding of data warehouse platforms such as Snowflake, ClickHouse, etc.
Define, validate, and implement robust data models and database solutions for clients across sectors such as Agriculture, Supply Chain, and Life Sciences.
Oversee end-to-end database resource provisioning in the cloud, primarily on Azure, covering IaaS, PaaS, and SaaS models, along with proactive cost management and optimization.
Hands-on expertise in data migration strategies between on-premises and cloud environments, ensuring minimal downtime and secure transitions.
Experienced in database performance tuning: identifying and resolving SQL code bottlenecks, code review, optimization for high throughput, and regular database maintenance including defragmentation.
Solid understanding of High Availability (HA) and Disaster Recovery (DR) solutions, with experience in setting up failover, replication, backup, and recovery strategies.
Expertise in implementing secure data protection measures such as encryption (at rest and in transit), data masking, access controls, DLP strategies, and ensuring regulatory compliance with GDPR, PII, PCI-DSS, HIPAA, etc. (see the illustrative sketch after this posting).
Skilled in managing data integration, data movement, and data reporting pipelines using tools like Azure Data Factory (ADF), Apache NiFi, and Talend.
Fair understanding of database internals, storage engines, indexing strategies, and partitioning for optimal resource and performance management.
Strong knowledge in Master Data Management (MDM), data cataloging, metadata management, and building comprehensive data lineage frameworks.
Proven experience in implementing monitoring and alerting systems for database health and capacity planning using tools like Azure Monitor, Grafana, or custom scripts.
Exposure to DevOps practices for database management, including CI/CD pipelines for database deployments, version control of database schemas, and Infrastructure as Code (IaC) practices (e.g., Terraform, ARM templates).
Experience collaborating with data analytics teams to provision optimized environments as data is shared between RDBMS, NoSQL and Snowflake layers.
Knowledge of security best practices for multi-tenant database environments and data segmentation strategies.
Ability to guide the evolution of data governance frameworks, defining policies, standards, and best practices for database environments.

Job Location: Chennai
Notice Period: 15 Days / Immediate / Currently serving notice period - max 30 days
Shift: Day Shift
Experience: Min 12 Years
Work Mode: Hybrid
Grade: D1 Associate Director
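For illustration only: the responsibilities above include data masking among the data protection controls. Below is a minimal sketch, assuming deterministic tokenisation is acceptable; the column names, salt handling and masked-value format are hypothetical, and a production setup would source the salt from a secrets manager.

# Sketch: deterministic masking of a PII column so masked values stay joinable
# across tables (same input -> same token) without exposing the raw value.
import hashlib

SALT = b"rotate-me-via-secrets-manager"  # placeholder, not a real secret

def mask_email(email: str) -> str:
    digest = hashlib.sha256(SALT + email.lower().encode("utf-8")).hexdigest()[:12]
    return f"user_{digest}@masked.example"

rows = [{"id": 1, "email": "alice@example.com"}, {"id": 2, "email": "bob@example.com"}]
masked = [{**r, "email": mask_email(r["email"])} for r in rows]
print(masked)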
Posted 1 month ago
3 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: Senior Associate / Manager - NiFi Developer
Job Location: Pan India
Candidates should possess 8 to 12 years of experience, of which 3+ years should be relevant.

Roles & Responsibilities:
Design, develop, and manage data pipelines using Apache NiFi
Integrate with systems like Kafka, HDFS, Hive, Spark, and RDBMS
Monitor, troubleshoot, and optimize data flows (see the sketch after this posting)
Ensure data quality, reliability, and security
Work with cross-functional teams to gather requirements and deliver data solutions

Skills Required:
Strong hands-on experience with Apache NiFi
Knowledge of data ingestion, streaming, and batch processing
Experience with Linux, shell scripting, and cloud environments (AWS/GCP is a plus)
Familiarity with REST APIs, JSON/XML, and data transformation

Role: NiFi Developer - SA/M
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate
Employment Type: Full Time, Permanent
Key Skills: NiFi Developer, Design, Development

Other Information
Job Code: GO/JC/21435/2025
Recruiter Name: SPriya
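For illustration only: monitoring and troubleshooting NiFi data flows, as this role describes, often leans on NiFi's REST API. A minimal Python sketch follows; the endpoint path and response fields reflect recent NiFi versions as an assumption, the host is a placeholder, and a secured cluster would also need authentication (e.g., a bearer token).

# Sketch: polling NiFi's REST API for overall flow status as a basic
# health check on a running pipeline. Verify the endpoint against your
# NiFi version; stdlib only, no extra dependencies.
import json
import urllib.request

NIFI_URL = "http://localhost:8080/nifi-api/flow/status"  # placeholder host

with urllib.request.urlopen(NIFI_URL, timeout=10) as resp:
    status = json.load(resp)["controllerStatus"]

print("active threads:", status.get("activeThreadCount"))
print("queued:", status.get("queued"))  # e.g. "120 / 4.5 MB"

A check like this is easy to wrap in a cron job or alerting script to flag queues backing up before users notice.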
Posted 1 month ago
5 - 8 years
0 Lacs
Pune, Maharashtra, India
Hybrid
About Company: Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. Headquartered in Bengaluru, it has gross revenue of ₹222.1 billion, a global workforce of 234,054, is listed on NASDAQ, and operates in over 60 countries, serving clients across various industries including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line, with major delivery centers in India in cities such as Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata and Noida.

Job Title: ETL Testing + Java + API Testing + UNIX Commands
Location: Pune [Kharadi] (Hybrid)
Experience: 7+ yrs [with 7+ years of relevant experience in ETL Testing]
Job Type: Contract to hire
Notice Period: Immediate joiners
Mandatory Skills: ETL Testing, Java, API Testing, UNIX Commands

Job Description:
1) Key Responsibilities:
Extensive experience in validating ETL processes, ensuring accurate data extraction, transformation, and loading across multiple environments.
Proficient in Java programming, with the ability to understand and write Java code when required.
Advanced skills in SQL for data validation, querying databases, and ensuring data consistency and integrity throughout the ETL process (a small reconciliation sketch follows this posting).
Expertise in utilizing Unix commands to manage test environments, handle file systems, and execute system-level tasks.
Proficient in creating shell scripts to automate testing processes, enhancing productivity and reducing manual intervention.
Ensuring that data transformations and loads are accurate, with strong attention to identifying and resolving discrepancies in the ETL process.
Focused on automating repetitive tasks and optimizing testing workflows to increase overall testing efficiency.
Write and execute automated test scripts using Java to ensure the quality and functionality of ETL solutions.
Utilize Unix commands and shell scripting to automate repetitive tasks and manage system processes.
Collaborate with cross-functional teams, including data engineers, developers, and business analysts, to ensure the ETL processes meet business requirements.
Ensure that data transformations, integrations, and pipelines are robust, secure, and efficient.
Troubleshoot data discrepancies and perform root cause analysis for failed data loads.
Create comprehensive test cases, execute them, and document test results for all data flows.
Actively participate in the continuous improvement of ETL testing processes and methodologies.
Experience with version control systems (e.g., Git) and integrating testing into CI/CD pipelines.

2) Tools & Technologies (Good to Have):
Experience with Hadoop ecosystem tools such as HDFS, MapReduce, Hive, and Spark for handling large-scale data processing and storage.
Knowledge of NiFi for automating data flows, transforming data, and integrating different systems seamlessly.
Experience with tools like Postman, SoapUI, or RestAssured to validate REST and SOAP APIs, ensuring correct data exchange and handling of errors.
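For illustration only: a row-count and amount-sum reconciliation between source and target tables is one of the most common SQL data-validation checks in ETL testing. The sketch below uses sqlite3 purely to stay self-contained; the table and column names are hypothetical, and in the role described the connections would point at the real source system and warehouse.

# Sketch: basic source-vs-target reconciliation after an ETL load.
# Counts and sums should match; a mismatch points at dropped or
# duplicated rows, or a bad transformation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL);
    CREATE TABLE tgt_orders (id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.5), (3, 30.0);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 20.5);  -- one row missing
""")

src = conn.execute("SELECT COUNT(*), ROUND(SUM(amount), 2) FROM src_orders").fetchone()
tgt = conn.execute("SELECT COUNT(*), ROUND(SUM(amount), 2) FROM tgt_orders").fetchone()

result = "PASS" if src == tgt else "FAIL"
print(f"{result}: source={src} target={tgt}")

The same check is easy to script per table and wire into a CI pipeline so every load is validated automatically.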
Posted 1 month ago
5 - 8 years
0 Lacs
Pune, Maharashtra, India
Hybrid
Key Result Areas And Activities

ETL Pipeline Development and Maintenance: Design, develop, and maintain ETL pipelines using Cloudera tools such as Apache NiFi, Apache Flume, and Apache Spark. Create and maintain comprehensive documentation for data pipelines, configurations, and processes.
Data Integration and Processing: Integrate and process data from diverse sources including relational databases, NoSQL databases, and external APIs.
Performance Optimization: Optimize performance and scalability of Hadoop components (HDFS, YARN, MapReduce, Hive, Spark) to ensure efficient data processing. Identify and resolve issues related to data pipelines, system performance, and data integrity.
Data Quality and Transformation: Implement data quality checks and manage data transformation processes to ensure accuracy and consistency.
Data Security and Compliance: Apply data security measures and ensure compliance with data governance policies and regulatory requirements.

Essential Skills
Proficiency in Cloudera Data Platform (CDP) - Cloudera Data Engineering.
Proven track record of successful data lake implementations and pipeline development.
Knowledge of data lakehouse architectures and their implementation.
Hands-on experience with Apache Spark and Apache Airflow within the Cloudera ecosystem (a small pipeline sketch follows this posting).
Proficiency in programming languages such as Python, Java, Scala, and Shell.
Exposure to containerization technologies (e.g., Docker, Kubernetes) and system-level understanding of data structures, algorithms, distributed storage, and compute.

Desirable Skills
Experience with other CDP services like DataFlow and Stream Processing.
Familiarity with cloud environments such as AWS, Azure, or Google Cloud Platform.
Understanding of data governance and data quality principles.
CCP Data Engineer certified.

Qualifications
7+ years of experience in Cloudera/Hadoop/Big Data engineering or related roles.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Qualities
Can influence and implement change; demonstrates confidence, strength of conviction and sound decisions.
Believes in dealing with a problem head-on; approaches it in a logical and systematic manner; is persistent and patient; can independently tackle the problem, is not over-critical of the factors that led to it and is practical about it; follows up with developers on related issues.
Able to consult, write, and present persuasively.
Able to work in a self-organized and cross-functional team.
Able to iterate based on new information, peer reviews, and feedback.
Able to work seamlessly with clients across multiple geographies.
Research-focused mindset.
Proficiency in English (read/write/speak) and communication over email.
Excellent analytical, presentation, reporting, documentation, and interactive skills.
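For illustration only: a minimal PySpark batch step of the kind such a Cloudera pipeline might contain, reading raw CSV from HDFS, applying a basic data-quality filter, and writing a partitioned Hive table. The paths, database/table and column names are hypothetical, and it assumes a Hive-enabled SparkSession on the cluster.

# Sketch: ingest raw orders from HDFS, apply a quality check, and publish
# a partitioned Hive table for downstream consumers.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("orders-ingest")
         .enableHiveSupport()
         .getOrCreate())

raw = spark.read.option("header", True).csv("hdfs:///data/raw/orders/")  # placeholder path

clean = (raw
         .filter(F.col("order_id").isNotNull())           # basic data-quality check
         .withColumn("amount", F.col("amount").cast("double"))
         .withColumn("load_date", F.current_date()))

(clean.write
 .mode("overwrite")
 .partitionBy("load_date")
 .saveAsTable("analytics.orders_clean"))                  # placeholder table

In practice a step like this would be one Airflow task, with the quality filter expanded into explicit checks that fail the run rather than silently dropping rows.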
Posted 1 month ago
6 - 11 years
18 - 30 Lacs
Gurugram
Work from Office
Application layer technologies including Tomcat/Node.js, Netty, Spring Boot, Hibernate, Elasticsearch, Kafka, Apache Flink
Frontend technologies including React.js, Angular, Android/iOS
Data storage technologies like Oracle, S3, Postgres, MongoDB
Posted 1 month ago
- 2 years
3 - 8 Lacs
Lucknow
Hybrid
Develop and maintain scalable data pipelines. Collaborate with data scientists and analysts to support business needs. Work with cloud platforms like AWS, Azure, or Google Cloud. Work effectively with cross-functional teams. Data modelling.
Posted 1 month ago
5 - 8 years
0 Lacs
Pune, Maharashtra, India
Hybrid
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title: Senior Software Engineer - Java/Scala development, Hadoop, Spark

Overview
The Loyalty Rewards & Segments program is looking for a Senior Software Engineer to drive the solutions that enable our customers to optimize their loyalty programs from beginning to end. We build and manage global solutions that enable merchants and issuers to offer points, miles, or cashback benefits seamlessly to their cardholders. Customers are able to build and implement custom promotions using our sophisticated rules engine to target spend categories or increase engagement with cardholders. The ideal candidate is passionate about the technology and has a proven record of developing high-quality, secure code that is modular, functional and testable.

Role
The Senior Software Engineer will be responsible for developing solutions with a high level of innovation, high quality and faster time to market. This position interacts with product managers, engineering leaders, architects, software developers and business operations on the definition and delivery of highly scalable and secure solutions.

The role includes:
Hands-on development, writing high-quality, secure code that is modular, functional and testable.
Creating or introducing, testing, and deploying new technology to optimize the service.
Contributing to all parts of the software's development including design, development, documentation, and testing.
Strong ownership of deliverables.
Ensuring application stability in production by creating solutions that provide operational health.
Mentoring and leading new developers while driving modern engineering practices.
Communicating, collaborating and working effectively in a global environment.

All About You
Strong analytical and excellent problem-solving skills and experience working in an Agile environment.
Experience with XP, TDD and BDD in the software development process.
Proficiency in Java, Scala and SQL (Oracle, Postgres, H2, Hive, and HBase) and building pipelines.
Expertise and deep understanding of the Hadoop ecosystem, including HDFS, YARN and MapReduce; tools like Hive, Pig and Flume; data processing frameworks like Spark; cloud platforms; orchestration tools such as Apache NiFi / Airflow; and Apache Kafka (a small streaming sketch follows this posting).
Expertise in web applications (Spring Boot, Angular, Java, PCF), web services (REST/OAuth), and Big Data technologies (Hadoop, Spark, Hive, HBase) and tools (Sonar, Splunk, Dynatrace).
Expertise in SQL, Oracle and Postgres.
Experience in microservices and event-driven architecture.
Soft skills: strong verbal and written communication to demo features to product owners; strong leadership qualities to mentor and support junior team members; proactive, with the initiative to take development work from inception to implementation.
Familiar with secure coding standards (e.g., OWASP, CWE, SEI CERT) and vulnerability management.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard's security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

R-246311
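For illustration only: the skills above pair Kafka with Spark. A minimal Spark Structured Streaming sketch consuming a Kafka topic follows; the broker address, topic name and checkpoint location are placeholders, and it assumes the spark-sql-kafka connector package is available on the cluster.

# Sketch: one-minute windowed event counts from a Kafka topic using
# Spark Structured Streaming, written to the console for inspection.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("txn-stream").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "transactions")               # placeholder topic
          .load()
          .select(F.col("timestamp"),
                  F.col("value").cast("string").alias("payload")))

# Tumbling one-minute windows, with a short watermark to bound late data.
counts = (events
          .withWatermark("timestamp", "2 minutes")
          .groupBy(F.window(F.col("timestamp"), "1 minute"))
          .count())

query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .option("checkpointLocation", "/tmp/chk/txn-stream")  # placeholder
         .start())
query.awaitTermination()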
Posted 2 months ago