Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
12.0 - 15.0 years
35 - 40 Lacs
Hyderabad
Work from Office
The position will involve working with a very experienced physical design team for Server SoC and is responsible for delivering the physical design of tiles and full chip to meet challenging goals for frequency, power and other design requirements for AMD next-generation processors, in a fast-paced environment on cutting-edge technology.

THE PERSON: The ideal candidate has significant industry experience, a positive attitude, seeks new challenges, and has strong analytical and problem-solving skills. You have excellent communication and presentation skills, demonstrated through technical publications, presentations, trainings, executive briefings, etc. You are highly adept at collaboration among top thinkers and engineers alike, ready to mentor and guide, and to help elevate the knowledge and skills of the team around you.

KEY RESPONSIBILITIES:
- RTL-to-GDS2 flow: handling floorplan, physical implementation of power plan, synthesis, placement, CTS, routing, extraction, timing closure (tile level and full chip), physical verification (DRC/LVS), crosstalk analysis, EM/IR
- Handling different PnR tools: Synopsys Fusion Compiler, Cadence Innovus, PrimeTime, StarRC, Mentor Graphics Calibre, Apache RedHawk
- Identify and implement opportunities for improving PPA

PREFERRED EXPERIENCE:
- 12+ years of professional experience in physical design, preferably with high-performance designs
- Experience in automated synthesis and timing-driven place and route of RTL blocks for high-speed datapath and control logic applications
- Experience in automated design flows for clock tree synthesis, clock and power gating techniques, scan stitching, design optimization for improved timing/power/area, and design cycle time reduction
- Experience in floorplanning, establishing design methodology, IP integration, checks for logic equivalence, physical/timing/electrical quality, and final signoff for large IP delivery
- Strong experience with tools for logic synthesis, place and route, timing analysis, and design checks for physical and electrical quality; familiarity with tools for schematics, layout, and circuit/logic simulation
- Experience in STA and full-chip timing
- Versatility with scripts to automate the design flow; proficiency in scripting languages such as Perl and Tcl
- Strong communication skills, ability to multi-task across projects, and work with geographically distributed teams
- Experience in FinFET dual-patterning nodes such as 16/14/10/7/5/3nm
- Excellent physical design and timing background; good understanding of computer organization/architecture is preferred
- Strong analytical/problem-solving skills and pronounced attention to detail

ACADEMIC CREDENTIALS: Bachelor's or Master's in Electronics/Electrical Engineering
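The scripting requirement above (Perl/Tcl to automate the design flow) often comes down to mining tool reports during timing closure. A minimal sketch in Python, with an invented report snippet; real STA output (e.g. from PrimeTime) varies by tool and report settings:

```python
import re

def worst_negative_slack(report: str):
    """Return the most negative slack found in a timing report, or None.

    The line format matched here is a simplified illustration of an
    STA violation line; adjust the pattern for your tool's output.
    """
    slacks = [float(m) for m in
              re.findall(r"slack \(VIOLATED\)\s+(-?\d+\.\d+)", report)]
    return min(slacks) if slacks else None

sample = """
  data arrival time    1.234
  slack (VIOLATED)    -0.057
  data arrival time    1.891
  slack (VIOLATED)    -0.112
"""
print(worst_negative_slack(sample))  # -0.112
```

A script like this would typically feed a regression dashboard or gate a flow step on whether WNS has improved between runs.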
Posted 2 days ago
2.0 - 7.0 years
13 - 17 Lacs
Noida, Bengaluru
Work from Office
The Technical Support Engineer role offers an outstanding opportunity to assist Adobe's world-class Commerce Cloud customer base. You will address technical application and infrastructure issues, ensuring our customers are set up for success. Take ownership of high-priority customer issues while collaborating with Adobe Support and Engineering teams. Thoroughly solve and document customer cases for effective problem and preventative case management. Our mission is to build memorable customer experiences, making them outstandingly successful with our products!

What you'll do
- Be the first point of contact for customer concerns related to technical issues with the Magento e-commerce application.
- Advocate for customers and represent their needs with internal product and engineering teams.
- Provide timely responses and resolution to technical, product, and cloud infrastructure inquiries.
- Ensure resolution within established Service Level Agreement guidelines.
- Troubleshoot and qualify cases before escalating to engineering.
- Answer questions regarding product functionality and usage.
- Manage high-priority technical incidents and critical outages.
- Contribute to product content creation, including KB articles, whitepapers, and forum participation.
- Conduct knowledge transfer sessions to reduce critical issues within Adobe.

What you need to succeed
- At least 2 years of working experience with Magento or Commerce Cloud.
- 5 years of experience in an enterprise software or cloud support environment.
- Excellent oral and written communication skills in English.
- Strong knowledge of the Linux command line.
- Familiarity with Apache, NGINX, Redis, DNS, CDN, and SSL.
- Deep expertise in MySQL and database queries.
- Familiarity with programming/scripting languages such as Node.js, Perl, Java, and Python.
- Understanding of modern web technologies and their relationships.
- Experience in solving web application and performance issues.
- Ability to analyze issues via logs and other sources for in-depth reviews.
- Strong organizational and time management skills.
- Proficiency in technical problem-solving methodologies.
- Ability to adapt and thrive in a dynamic environment.
- Empathy and transparency when customers escalate their concerns, showing high patience and skill.
- Willingness to work different shifts, including North America hours, and be available for on-call rotation, off-hours, holidays, and weekends.
- Understand the business impact of issues; report call generators, severe issues, trends, feature requests, and common questions.
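The log-analysis skill above is often the fastest route to qualifying a case before escalation. A hedged sketch in Python of one such triage step, counting 5xx responses per path in combined-format access log lines (the sample lines are invented; real Apache/NGINX formats can differ by configuration):

```python
import re
from collections import Counter

LOG_RE = re.compile(r'"\w+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def count_server_errors(lines):
    """Count 5xx responses per request path in access log lines."""
    errors = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m and m.group("status").startswith("5"):
            errors[m.group("path")] += 1
    return errors

logs = [
    '1.2.3.4 - - [01/Jan/2024:10:00:00 +0000] "GET /checkout HTTP/1.1" 502 123',
    '1.2.3.4 - - [01/Jan/2024:10:00:01 +0000] "GET /home HTTP/1.1" 200 456',
    '5.6.7.8 - - [01/Jan/2024:10:00:02 +0000] "POST /checkout HTTP/1.1" 500 789',
]
print(count_server_errors(logs))  # Counter({'/checkout': 2})
```

Concentrated errors on one path usually point at application code; errors spread across all paths point at infrastructure (PHP-FPM, Redis, the database).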
Posted 2 days ago
4.0 - 7.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Responsibilities
- Incident management
- Troubleshooting issues
- Contributing to development
- Collaborating with other teams
- Suggesting improvements
- Enhancing system performance
- Training new employees

Mandatory Skills: AWS Redshift, PL/SQL, Apache Airflow, Unix, ETL, DWH
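As a rough illustration of the ETL/DWH side of this role: warehouse loads should be idempotent so a retried Airflow task does not duplicate data. A minimal sketch, using stdlib sqlite3 as a local stand-in for Redshift (table and column names are invented):

```python
import sqlite3

def load_daily_metrics(conn, rows):
    """Idempotent load: delete the target day's partition, then insert.

    In Airflow this body would typically run inside a task; sqlite3
    stands in here for a real warehouse connection.
    """
    conn.execute("""CREATE TABLE IF NOT EXISTS daily_metrics
                    (day TEXT, metric TEXT, value REAL)""")
    for day in {r[0] for r in rows}:
        conn.execute("DELETE FROM daily_metrics WHERE day = ?", (day,))
    conn.executemany("INSERT INTO daily_metrics VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
rows = [("2024-01-01", "orders", 120.0), ("2024-01-01", "revenue", 9800.5)]
load_daily_metrics(conn, rows)
load_daily_metrics(conn, rows)  # rerunning does not duplicate data
count = conn.execute("SELECT COUNT(*) FROM daily_metrics").fetchone()[0]
print(count)  # 2
```

The delete-then-insert pattern is the simplest retry-safe load; on Redshift the same idea is usually expressed as a staging table plus a transactional swap or merge.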
Posted 2 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Banyan Software provides the best permanent home for successful enterprise software companies, their employees, and customers. We are on a mission to acquire, build and grow great enterprise software businesses all over the world that have dominant positions in niche vertical markets. In recent years, Banyan was named the #1 fastest-growing private software company in the US on the Inc. 5000 and amongst the top 10 fastest-growing companies on the Deloitte Technology Fast 500. Founded in 2016 with a permanent capital base set up to preserve the legacy of founders, Banyan focuses on a buy-and-hold-for-life strategy for growing software companies that serve specialized vertical markets.

About Campus Café
Our student information system is an integrated SIS that manages the entire student life cycle, including Admissions, Student Services, Business Office, Financial Aid, Alumni Development and Career Tracking functions. Our SIS is a single-database student information system that allows clients to manage marketing, recruitment, applications, course registration, billing, transcripts, financial aid, career tracking, alumni development, fundraising, student attendance and class rosters. It allows real-time access to data that is more accurate and available when our users need it. Our SaaS model means clients don’t need to build and maintain an expensive and complex IT infrastructure. Our APIs and custom integrations will keep all their data in sync and accessible in real time. Since the database is fully integrated, everything is updated in real time and there’s no waiting for information.

Position Overview
We are looking for a versatile System Administrator / DevOps Engineer to support and enhance our Azure-hosted infrastructure, running Java applications on Tomcat, backed by Microsoft SQL Server on Windows servers.
The ideal candidate will have a solid background in Windows system administration, hands-on experience with Azure services, and a DevOps mindset focused on automation, reliability, and performance.

Key Responsibilities
- Manage and maintain Windows Server environments hosted in Azure.
- Support the deployment, configuration, and monitoring of Java applications running on Apache Tomcat.
- Administer Microsoft SQL Server, including performance tuning, backups, and availability in Azure.
- Automate infrastructure tasks, such as Java and Tomcat upgrades, using PowerShell, Azure CLI, or Azure Automation.
- Build and maintain CI/CD pipelines for Java-based applications using tools such as Jenkins or GitHub Actions.
- Manage and monitor Azure resources: Virtual Machines, Azure SQL, App Services, Azure Monitor, and networking (App Gateway, Firewall, VNets, NSGs, VPN).
- Implement and monitor backup, recovery, and security policies within the Azure environment.
- Collaborate with development and operations teams to optimize deployment strategies and system performance.
- Troubleshoot issues across systems, applications, and cloud services.

Required Skills & Experience
- 3+ years of experience in system administration or DevOps, with a focus on Windows environments.
- Experience deploying and managing Java applications on Tomcat.
- Strong knowledge of Microsoft SQL Server (on-prem and/or Azure-hosted).
- Solid experience with Azure IaaS and PaaS services (e.g., Azure VMs, Azure SQL, Azure Monitor, Azure Storage).
- Proficiency in scripting and automation (PowerShell, Azure CLI, or similar).
- Familiarity with CI/CD tools such as Azure DevOps, Jenkins, or GitHub Actions.
- Understanding of networking, security groups, and VPNs in a cloud context.

Preferred Skills
- Experience with Azure infrastructure as code (e.g., ARM templates, Bicep, or Terraform).
- Familiarity with Azure Active Directory, RBAC, and Identity & Access Management.
- Experience with containerization (Docker) and/or orchestration (AKS) is a plus.
- Microsoft Azure certifications (AZ-104, AZ-400) or equivalent experience.

Diversity, Equity, Inclusion & Equal Employment Opportunity at Banyan
Banyan affirms that inequality is detrimental to our Global Teams, associates, our Operating Companies, and the communities we serve. As a collective, our goal is to impact lasting change through our actions. Together, we unite for equality and equity. Banyan is committed to equal employment opportunities regardless of any protected characteristic, including race, color, genetic information, creed, national origin, religion, sex, affectional or sexual orientation, gender identity or expression, lawful alien status, ancestry, age, marital status, or protected veteran status and will not discriminate against anyone on the basis of a disability. We support an inclusive workplace where associates excel based on personal merit, qualifications, experience, ability, and job performance.
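Much of the monitoring-and-recovery work described in this role reduces to small, explicit decision rules. A hedged sketch of one such rule for an app-server probe; the function, thresholds, and policy here are invented for illustration, and a real setup would normally rely on Azure Monitor alerts or a load-balancer health probe instead of hand-rolled logic:

```python
def should_restart(status_code, latency_ms, consecutive_failures,
                   max_latency_ms=5000, failure_threshold=3):
    """Decide whether to restart an app server after a health probe.

    Requires several consecutive bad probes before restarting, so a
    single slow response does not trigger a disruptive bounce.
    """
    unhealthy = status_code >= 500 or latency_ms > max_latency_ms
    return unhealthy and consecutive_failures + 1 >= failure_threshold

print(should_restart(503, 120, consecutive_failures=2))  # True
print(should_restart(200, 120, consecutive_failures=5))  # False
```

The point of the `failure_threshold` design is debounce: restarting Tomcat on every blip trades short latency spikes for full cold-start outages.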
Posted 2 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview
Job Title: Engineer, AS
Location: Pune, India
Corporate Title: AS

Role Description
As an Associate for Technology in our Technology team, you will be a strong engineer who will help solve business initiatives. You’ll be an integral part of the bank’s technology group, and a strong code committer. Deutsche Bank is investing heavily in technology, which means we are investing in you. Join us here, and you’ll constantly be looking ahead.

What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for 35 yrs. and above

Your Key Responsibilities
- Design databases using frameworks and available components to meet business requirements, and document the design
- Understand the business requirement and create a logical data model; convert the logical data model to a physical data model
- Participate in design reviews and act as gatekeeper for all changes to the database
- Contribute to the definition of development and software standards to implement/reflect DB guidelines (naming conventions, encryption, and security settings) and ensure standards are adhered to
- Provide Level 3 support for technical infrastructure components such as databases
- Contribute to root cause/problem analysis
- Develop software components in accordance with the detailed software requirements specification, the functional design, and technical design documents

Your Skills And Experience
Skills You’ll Need
- Database design techniques; proficient in Oracle 10g or higher; failover strategies; performance tuning, troubleshooting and monitoring of databases
- Familiar with various design and architectural patterns; able to perform code and detailed design reviews
- Agile process awareness and hands-on experience
- Java and J2EE technologies: Spring MVC, JMS, Apache Camel, Spring Batch, Oracle, Unix commands, CI/CD, Git/SVN
- JUnit/Mockito (any unit test framework); SonarQube/Emma code coverage and code quality tools
- Familiarity with build tools such as Ant, Maven, and Gradle
- SSL, OAuth, JWT
- Performance monitoring tools like Java Heap Analyzer, VisualVM, JMX Console
- Microservices: Spring Boot, Spring Cloud, Kubernetes, Zuul API Gateway, Spring JPA, Spring Sleuth; any cloud experience
- MQ, publish/subscribe model

Good to Have
- JBoss, Kafka, Google Cloud Platform
- Jenkins, TeamCity, Grafana, Prometheus
- Camunda or any other workflow implementation experience

How We’ll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

Skills That Will Help You Excel
- Ability to write high-quality code according to DB standards
- Ability to solve business or production problems
- Strong analytical skills
- Proficient communication skills
- Proficient English language skills (written/verbal)
- Ability to work in virtual teams and in matrixed organizations

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day.
This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
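One concrete instance of the database-standards gatekeeping this role describes is checking schema names against the agreed naming conventions. A minimal sketch, assuming a simple snake_case rule; real DB guidelines (prefixes, length limits, reserved words) would extend these checks:

```python
import re

SNAKE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def check_names(tables):
    """Return table and column names that break the snake_case rule.

    `tables` maps table name -> list of column names; the convention
    here is a placeholder for a team's actual DB guidelines.
    """
    violations = []
    for table, columns in tables.items():
        if not SNAKE.match(table):
            violations.append(table)
        violations.extend(c for c in columns if not SNAKE.match(c))
    return violations

schema = {"customer_order": ["order_id", "OrderDate"], "TmpTable": ["id"]}
print(check_names(schema))  # ['OrderDate', 'TmpTable']
```

Run as part of design review or a CI check, this turns a written guideline into an enforced one before changes reach the database.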
Posted 2 days ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Responsibilities & Skills
Experience: 6+ years
- Strong knowledge and development experience in Vue.js.
- Good exposure to front-end technologies including CSS3, JavaScript, and HTML5.
- Experience in asynchronous programming, modules, the event loop, file system operations, HTTP, and web servers.
- Understanding of web application hosting on Tomcat or Apache.
- Strong experience with Linux platforms and commands.
- Able to work independently according to project requirements.
- Strong communication and interpersonal skills.
- Ability to diagnose and troubleshoot issues during development.
- Able to create relevant project documents such as HDD & LDD and unit test case reports.
- Knowledge of AJAX, jQuery, MySQL
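The asynchronous-programming and event-loop items above are JavaScript concepts, but the mechanics carry over across languages; a minimal illustration in Python's asyncio (task names and delays are invented):

```python
import asyncio

async def fetch(name, delay):
    # Simulates a non-blocking I/O call: while this coroutine is
    # suspended in sleep(), the event loop runs other coroutines.
    await asyncio.sleep(delay)
    return name

async def main():
    # Both "requests" run concurrently on one thread; total wall time
    # is roughly the slower delay, not the sum of both.
    return await asyncio.gather(fetch("a", 0.02), fetch("b", 0.01))

print(asyncio.run(main()))  # ['a', 'b']
```

The same single-threaded cooperative model underlies JavaScript's event loop: `await asyncio.gather(...)` plays the role of `await Promise.all([...])`, and results come back in argument order regardless of which finished first.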
Posted 2 days ago
89.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Company Description
GfK - Growth from Knowledge. For over 89 years, we have earned the trust of our clients around the world by solving critical questions in their decision-making process. We fuel their growth by providing a complete understanding of their consumers’ buying behavior, and the dynamics impacting their markets, brands and media trends. In 2023, GfK combined with NIQ, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - GfK drives “Growth from Knowledge”.

Job Description
It's an exciting time to be a builder. Constant technological advances are creating an exciting new world for those who understand the value of data. The mission of NIQ’s Media Division is to turn NIQ into the global leader that transforms how consumer brands plan, activate and measure their media activities. Recombine is the delivery area focused on maximising the value of data assets in our NIQ Media Division. We apply advanced statistical and machine learning techniques to unlock deeper insights, whilst integrating data from multiple internal and external sources. Our teams develop data integration products across various markets and product areas, delivering enriched datasets that power client decision-making.

Role Overview
We are looking for a Principal Software Engineer for our Recombine delivery area to provide technical leadership within our development teams, ensuring best practices, architectural coherence, and effective collaboration across projects. This role is ideal for a highly experienced engineer who can bridge the gap between data engineering, data science, and software engineering, helping teams build scalable, maintainable, and well-structured data solutions.
As a Principal Software Engineer, you will play a hands-on role in designing and implementing solutions while mentoring developers, influencing technical direction, and driving best practices in software and data engineering. This role includes line management responsibilities, ensuring the growth and development of team members. The role will be working within an AWS environment, leveraging the power of cloud-native technologies and modern data platforms.

Key Responsibilities
Technical Leadership & Architecture
- Act as a technical architect, ensuring alignment between the work of multiple development teams in data engineering and data science.
- Design scalable, high-performance data processing solutions within AWS, considering factors such as governance, security, and maintainability.
- Drive the adoption of best practices in software development, including CI/CD, testing strategies, and cloud-native architecture.
- Work closely with Product Owners to translate business needs into technical solutions.

Hands-on Development & Technical Excellence
- Lead by example through high-quality coding, code reviews, and proof-of-concept development.
- Solve complex engineering problems and contribute to critical design decisions.
- Ensure effective use of AWS services, including AWS Glue, AWS Lambda, Amazon S3, Redshift, and EMR.
- Develop and optimise data pipelines, data transformations, and ML workflows in a cloud environment.

Line Management & Team Development
- Provide line management to engineers, ensuring their professional growth and development.
- Conduct performance reviews, set development goals, and mentor team members to enhance their skills.
- Foster a collaborative and high-performing engineering culture, promoting knowledge sharing and continuous improvement beyond team boundaries.
- Support hiring, onboarding, and career development initiatives within the engineering team.
Collaboration & Cross-Team Coordination
- Act as the technical glue between data engineers, data scientists, and software developers, ensuring smooth integration of different components.
- Provide mentorship and guidance to developers, helping them level up their skills and technical understanding.
- Work with DevOps teams to improve deployment pipelines, observability, and infrastructure as code.
- Engage with stakeholders across the business, translating technical concepts into business-relevant insights.

Governance, Security & Data Best Practices
- Champion data governance, lineage, and security across the platform.
- Advocate for and implement scalable data architecture patterns, such as Data Mesh, Lakehouse, or event-driven pipelines.
- Ensure compliance with industry standards, internal policies, and regulatory requirements.

Qualifications
Requirements & Experience
- Strong software engineering background with experience in designing and building production-grade applications in Python, Scala, Java, or similar languages.
- Proven experience with AWS-based data platforms, specifically AWS Glue, Redshift, Athena, S3, Lambda, and EMR.
- Expertise in Apache Spark and AWS Lake Formation, with experience building large-scale distributed data pipelines.
- Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions.
- Cloud experience in AWS, including containerisation (Docker, Kubernetes, ECS, EKS) and infrastructure as code (Terraform, CloudFormation).
- Strong knowledge of modern software architecture, including microservices, event-driven systems, and distributed computing.
- Experience leading teams in an agile environment, with a strong understanding of CI/CD pipelines, automated testing, and DevOps practices.
- Excellent problem-solving and communication skills, with the ability to engage with both technical and non-technical stakeholders.
- Proven line management experience, including mentoring, career development, and performance management of engineering teams.
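The staged data transformations this role oversees can be modelled, at toy scale, as function composition: each stage is a pure function over the dataset, and the pipeline is their left-to-right composition. A minimal sketch with invented step names; a Glue or Spark job applies the same idea to distributed DataFrames:

```python
from functools import reduce

def pipeline(*steps):
    """Compose transform steps into a single callable, applied left to right."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

# Two illustrative stages: drop duplicates (order-preserving), then enrich.
dedupe = lambda rows: list(dict.fromkeys(rows))
enrich = lambda rows: [(r, len(r)) for r in rows]

clean = pipeline(dedupe, enrich)
print(clean(["uk", "de", "uk"]))  # [('uk', 2), ('de', 2)]
```

Keeping stages pure makes each one independently testable and reorderable, which is the property that lets an orchestrator retry or backfill a single stage safely.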
Additional Information
Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 2 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company: Callerdesk.io
Location: Noida, Uttar Pradesh | On-site
Employment Type: Full-Time | Permanent
Experience Required: 3+ Years

Job Description
We are looking for an experienced and proactive MySQL Database Administrator to join our IT services team. The ideal candidate will have strong hands-on experience with MySQL architecture, database performance tuning, replication, and high-availability setups. If you’re passionate about database optimization, ensuring uptime, and working closely with development teams, we’d love to hear from you.

Minimum Requirements
Educational Qualification: B.Tech / B.E. / MCA / M.Sc in Computer Science or a related field
Experience:
- Minimum 3 years in IT/software development, web or mobile app projects
- Minimum 3 years in MySQL database administration & performance tuning

Responsibilities
- Administer and maintain MySQL Server databases, including installation and configuration.
- Monitor system health and performance, ensuring high availability and security.
- Perform real-time troubleshooting, diagnostics, and resolution of database issues.
- Recommend and implement database solutions to improve efficiency.
- Automate recurring processes, maintain documentation, and track issues.
- Support developers with schema refinement, partitioning, and query tuning.
- Manage GTID replication, master-slave setups, and InnoDB clusters.
- Set up and maintain DR (disaster recovery) and ProxySQL for high-load management.
- Work on physical backup/restoration and point-in-time recovery.
- Optimize SQL queries, triggers, events, stored procedures, and functions.
- Work with Linux OS, tools like MySQL Workbench and SQLyog, and database pipelines (Python / Apache NiFi preferred).

Preferred Skills
- Strong understanding of MySQL internal architecture
- Hands-on experience with MySQL Enterprise Edition
- Proficient in Linux server environments
- Experience working on e-governance or large-scale IT projects is a plus
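Query tuning usually starts from the slow query log. A hedged sketch in Python that pairs each `# Query_time:` header with the statement that follows; the log excerpt is a simplified rendering of MySQL's slow-log layout, and a tool like pt-query-digest does this far more thoroughly:

```python
import re

QUERY_TIME = re.compile(r"# Query_time: (?P<t>\d+\.\d+)")

def slow_queries(log_lines, threshold=1.0):
    """Return (seconds, sql) pairs for statements slower than threshold."""
    results, pending = [], None
    for line in log_lines:
        m = QUERY_TIME.search(line)
        if m:
            pending = float(m.group("t"))
        elif pending is not None and line.strip().upper().startswith(
                ("SELECT", "UPDATE", "DELETE", "INSERT")):
            if pending >= threshold:
                results.append((pending, line.strip()))
            pending = None
    return results

log = [
    "# Query_time: 2.50  Lock_time: 0.00",
    "SELECT * FROM orders WHERE status = 'open';",
    "# Query_time: 0.03  Lock_time: 0.00",
    "SELECT 1;",
]
print(slow_queries(log))  # [(2.5, "SELECT * FROM orders WHERE status = 'open';")]
```

Each statement this surfaces is then a candidate for `EXPLAIN`, an index, or a rewrite.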
Role Details
Role Title: MySQL Database Administrator
Department: Engineering Software
Industry: IT Services & Consulting
Function: DBA / Data Warehousing

Educational Background
UG: B.Tech/B.E. in Any Specialization
PG: MCA / M.Sc (Science) in Any Specialization

Package: 3.5 to 6 Lakh per annum

Key Skills
MySQL DBA, GTID Replication, Linux Server Administration, High Availability Setup, InnoDB Cluster, ProxySQL, Query Optimization, Database Backup & Recovery, Apache NiFi, MySQL Workbench, SQLyog, Stored Procedures, Triggers, E-Governance Projects

Ready to join a high-impact team and take your MySQL expertise to the next level? Apply now, and let’s build robust, scalable systems together.
Posted 2 days ago
8.0 - 13.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Working at Atlassian
Atlassians can choose where they work - whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, as part of being a distributed-first company.

Team: Core Engineering Reliability Team

Responsibilities
- Collaborate with engineering and TPM leaders, developers, and process engineers to create data solutions that extract actionable insights from incident and post-incident management data, supporting objectives of incident prevention and reducing detection, mitigation, and communication times.
- Work with diverse stakeholders to understand their needs and design data models, acquisition processes, and applications that meet those requirements.
- Add new sources, implement business rules, and generate metrics to empower product analysts and data scientists.
- Serve as the data domain expert, mastering the details of our incident management infrastructure.
- Take full ownership of problems from ambiguous requirements through rapid iterations.
- Enhance data quality by leveraging and refining internal tools and frameworks to automatically detect issues.
- Cultivate strong relationships between teams that produce data and those that build insights.

Minimum Qualifications / Your background
- BS in Computer Science or equivalent experience, with 8+ years as a Senior Data Engineer or in a similar role
- 10+ years of progressive experience in building scalable datasets and reliable data engineering practices
- Proficiency in Python, SQL, and data platforms like Databricks
- Proficiency in relational databases and query authoring (SQL)
- Demonstrable expertise designing data models for optimal storage and retrieval to meet product and business requirements
- Experience building and scaling experimentation practices, statistical methods, and tools in a large-scale organization
- Excellence in building scalable data pipelines using Spark (Spark SQL) with the Airflow scheduler/executor framework or similar scheduling tools
- Expert experience working with AWS data services or similar Apache projects (Spark, Flink, Hive, and Kafka)
- Understanding of data engineering tools/frameworks and standards to improve the productivity and quality of output for data engineers across the team
- Well versed in modern software development practices (Agile, TDD, CI/CD)

Desirable Qualifications
- Demonstrated ability to design and operate data infrastructure that delivers high reliability for our customers
- Familiarity working with datasets like monitoring, observability, performance, etc.

Benefits & Perks
Atlassian offers a wide range of perks and benefits designed to support you, your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more. To learn more, visit
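The incident metrics this role is built around (detection, mitigation, resolution times) are simple aggregations once the data model is right. A minimal sketch of MTTR over illustrative records; a real pipeline would pull these timestamps from the incident management system rather than literals:

```python
from datetime import datetime, timedelta

def mean_time_to_resolve(incidents):
    """Mean time to resolve (MTTR) across (detected_at, resolved_at) pairs."""
    durations = [resolved - detected for detected, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

incidents = [
    (datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 10, 30)),  # 30 min
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 2, 10, 30)),   # 90 min
]
print(mean_time_to_resolve(incidents))  # 1:00:00
```

The same shape (pair two lifecycle timestamps, average the deltas) yields MTTD and mean communication time; the hard part in practice is getting the lifecycle events modelled consistently across sources, which is exactly what the role's data-modelling work is for.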
Posted 2 days ago
4.0 - 8.0 years
11 - 15 Lacs
Pune
Work from Office
About the role:
Want to be on a team full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Data Platform team does just that. SailPoint is seeking a Senior Staff Data/Software Engineer to help build a robust data ingestion and processing system to power our data platform. This role is a critical bridge between teams. It requires excellent organization and communication as the coordinator of work across multiple engineers and projects. We are looking for well-rounded engineers who are passionate about building and delivering reliable, scalable data pipelines. This is a unique opportunity to build something from scratch while having the backing of an organization that has the muscle to take it to market quickly, with a very satisfied customer base.

Responsibilities:
- Spearhead the design and implementation of ELT processes, especially focused on extracting data from and loading data into various endpoints, including RDBMS, NoSQL databases, and data warehouses.
- Develop and maintain scalable data pipelines for both stream and batch processing, leveraging JVM-based languages and frameworks.
- Collaborate with cross-functional teams to understand diverse data sources and environment contexts, ensuring seamless integration into our data ecosystem.
- Utilize the AWS service stack wherever possible to implement lean design solutions for data storage, data integration, and data streaming problems.
- Develop and maintain workflow orchestration using tools like Apache Airflow.
- Stay abreast of emerging technologies in the data engineering space, proactively incorporating them into our ETL processes.
- Organize work from multiple Data Platform teams and customers with other data engineers.
- Communicate status, progress, and blockers of active projects to Data Platform leaders.
- Thrive in an environment with ambiguity, demonstrating adaptability and problem-solving skills.

Qualifications:
- BS in computer science or a related field.
- 10+ years of experience in data engineering or a related field.
- Demonstrated system-design experience orchestrating ELT processes targeting data.
- Excellent communication skills.
- Demonstrated ability to internalize business needs and drive execution from a small team.
- Excellent organization of work tasks and status of new and in-flight tasks, including impact analysis of new work.
- Strong understanding of Python; good understanding of Java.
- Strong understanding of SQL and data modeling.
- Familiarity with Airflow.
- Hands-on experience with at least one streaming or batch processing framework, such as Flink or Spark.
- Hands-on experience with containerization platforms such as Docker and container orchestration tools like Kubernetes.
- Proficiency in the AWS service stack.
- Experience with DBT, Kafka, Jenkins, and Snowflake.
- Experience leveraging tools such as Kustomize, Helm, and Terraform for implementing infrastructure as code.
- Strong interest in staying ahead of new technologies in the data engineering space.
- Comfortable working in ambiguous team situations, showcasing adaptability and drive in solving novel problems in the data engineering space.

Preferred
- Experience with AWS
- Experience with continuous delivery
- Experience instrumenting code for gathering production performance metrics
- Experience working with a data catalog tool (e.g., Atlan)

What success looks like in the role
Within the first 30 days you will:
- Onboard into your new role, get familiar with our product offering and technology, proactively meet peers and stakeholders, and set up your test and development environment.
- Seek to deeply understand business problems or common engineering challenges.
- Learn the skills and abilities of your teammates and align expertise with available work.

By 90 days:
- Proactively collaborate on, discuss, debate, and refine ideas, problem statements, and software designs with different (sometimes many) stakeholders, architects, and members of your team.
- Increase team velocity and contribute to maturing and delivering the Data Platform vision.
By 6 months:
- Collaborate with Product Management and the Engineering Lead to estimate and deliver small to medium complexity features more independently.
- Occasionally serve as a debugging and implementation expert during escalations of systems issues that have evaded the ability of less experienced engineers to solve in a timely manner.
- Share support of critical team systems by participating in calls with customers, learning the characteristics of currently running systems, and participating in improvements.
- Engage with team members, providing them with challenging work and building cross-skill expertise.
- Plan project support and execution with peers and Data Platform leaders.
SailPoint is an equal opportunity employer and we welcome all qualified candidates to apply to join our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other category protected by applicable law. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact hr@sailpoint.com or mail to 11120 Four Points Dr, Suite 100, Austin, TX 78726, to discuss reasonable accommodations.
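The ELT responsibilities in this posting can be sketched in miniature. The following is an illustrative Python example using the standard-library SQLite as a stand-in warehouse; the function, table, and column names are invented for the demo, not an actual SailPoint schema:

```python
import sqlite3

def run_elt(rows):
    """Toy ELT step: extract raw rows, load them as-is, then transform
    inside the database engine (the "L" happens before the "T")."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (user_id TEXT, action TEXT)")
    # Load: land the raw extract without reshaping it first.
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
    # Transform: aggregate in-database, as a warehouse ELT job would.
    conn.execute(
        "CREATE TABLE action_counts AS "
        "SELECT action, COUNT(*) AS n FROM events GROUP BY action"
    )
    return dict(conn.execute("SELECT action, n FROM action_counts"))
```

The same load-then-transform split is what distinguishes ELT from classic ETL: raw data lands first, and the transformation runs where the data lives.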
Posted 2 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Streaming data Technical skills requirements:
Mandatory Skills: Spark, Scala, AWS, Hadoop (Big Data)
Experience: 5+ Years
- Solid hands-on and solution-architecting experience in Big Data technologies (AWS preferred)
- Hands-on experience in AWS DynamoDB, EKS, Kafka, Kinesis, Glue, EMR
- Hands-on experience with a programming language like Scala with Spark
- Good command and working experience of Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases
- Hands-on working experience with any of the data engineering/analytics platforms (Hortonworks, Cloudera, MapR, AWS), AWS preferred
- Hands-on experience with data ingestion tools: Apache NiFi, Apache Airflow, Sqoop, and Oozie
- Hands-on working experience of data processing at scale with event-driven systems and message queues (Kafka, Flink, Spark Streaming)
- Hands-on working experience with AWS services like EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, Lake Formation
- Hands-on working experience with AWS Athena
- Data warehouse exposure to Apache NiFi, Apache Airflow, Kylo
- Operationalization of ML models on AWS (e.g. deployment, scheduling, model monitoring, etc.)
- Feature engineering and data processing to be used for model development
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
- Experience building data pipelines for structured/unstructured, real-time/batch, synchronous/asynchronous events using MQ, Kafka, and stream processing
- Hands-on working experience in analysing source system data and data flows, working with structured and unstructured data
- Must be very strong in writing SQL queries
- Strengthen the data engineering team with Big Data solutions
- Strong technical, analytical, and problem-solving skills
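The stream-processing skills listed above center on windowed aggregation over event streams. Here is a pure-Python sketch of a tumbling window, the concept that Spark Streaming and Flink implement at scale over Kafka topics; the function name and the 60-second window are illustrative assumptions:

```python
from collections import defaultdict

def window_counts(events, window_secs=60):
    """Bucket (timestamp, key) events into fixed tumbling windows and
    count occurrences per (window, key) pair."""
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = ts - (ts % window_secs)
        counts[(window_start, key)] += 1
    return dict(counts)
```

Real engines add the hard parts this sketch omits: out-of-order events, watermarks, and fault-tolerant state.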
Posted 2 days ago
8.0 years
0 Lacs
Greater Kolkata Area
On-site
Role: Technical Architect
Experience: 8-15 years
Location: Bangalore, Chennai, Gurgaon, Pune, and Kolkata
Mandatory Skills: Python, PySpark, SQL, ETL, pipelines, Azure Databricks, Azure Data Factory, and architecture design.
Primary Roles and Responsibilities:
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
- Provide forward-thinking solutions in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix them
- Work with the business to understand reporting-layer needs and develop data models to fulfill them
- Help junior team members resolve issues and technical challenges
- Drive technical discussions with client architects and team members
- Orchestrate the data pipelines in a scheduler via Airflow
Skills and Qualifications:
- Bachelor's and/or master's degree in computer science or equivalent experience
- Must have 8+ yrs. of total IT experience and 5+ years' experience in data warehouse/ETL projects
- Deep understanding of Star and Snowflake dimensional modelling
- Strong knowledge of Data Management principles
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture
- Hands-on experience in SQL, Python, and Spark (PySpark)
- Experience in the AWS/Azure stack
- Desirable to have ETL with batch and streaming (Kinesis)
- Experience in building ETL / data warehouse transformation processes
- Experience with Apache Kafka for streaming data / event-based data
- Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala)
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
- Experience working with structured and unstructured data, including imaging & geospatial data
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
- Experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with high attention to detail
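The Star-schema dimensional modelling called out above boils down to fact tables joined to dimension tables on surrogate keys. A toy sketch in Python with the standard-library SQLite; the fact_sales/dim_product schema is invented for illustration:

```python
import sqlite3

def star_query():
    """Join a fact table to a dimension table and aggregate by a
    dimension attribute: the core query shape of a star schema."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE dim_product (product_id INTEGER, category TEXT)")
    conn.execute("CREATE TABLE fact_sales (product_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO dim_product VALUES (?, ?)",
                     [(1, "books"), (2, "games")])
    conn.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                     [(1, 10.0), (1, 5.0), (2, 7.5)])
    # Surrogate-key join from fact to dimension, grouped by attribute.
    return dict(conn.execute(
        "SELECT d.category, SUM(f.amount) FROM fact_sales f "
        "JOIN dim_product d ON d.product_id = f.product_id "
        "GROUP BY d.category"))
```

A snowflake schema differs only in that the dimensions themselves are further normalized into sub-dimension tables.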
Posted 2 days ago
10.0 - 11.0 years
25 - 30 Lacs
Pune
Work from Office
Lead Software Engineer in Test
Overview: Mastercard is a technology company in the global payments industry. We operate the world's fastest payments processing network, connecting consumers, financial institutions, merchants, governments and businesses in more than 210 countries and territories. Mastercard products and solutions make everyday commerce activities - such as shopping, travelling, running a business and managing finances - easier, more secure and more efficient for everyone. Mastercard is seeking talented individuals to join our Digital team in Pune, India. Mastercard is researching and developing the next generation of products and services to enable consumers to securely, efficiently, and intelligently conduct transactions regardless of channel. Whether through traditional retail, mobile, or e-commerce, Mastercard innovation is leading the digital convergence of traditional and emerging payments technologies across a wide variety of new devices and services. Join our team and help shape the future of connected commerce!
Job Overview: In an exciting and fast-paced environment focused on developing payment authentication and security solutions, this position offers technical leadership and expertise throughout the development lifecycle of the ecommerce payment authentication platform under the Authentication program for Digital Authentication Services.
Role: We are looking for an Automation Tester to join the DAS team. This is a pivotal role, responsible for QA, load testing, and automation of various data-driven pipelines. The position involves managing testing infrastructure for functional testing and automation, and coordinating testing that spans multiple programs and projects. The ideal candidate will have experience working with large-scale data and automation testing of Java, cloud-native applications/services.
Responsibilities:
- Lead the development and maintenance of automated testing frameworks
- Provide technical leadership for new major initiatives
- Deliver innovative, cost-effective solutions which align to enterprise standards
- Drive the reduction of time spent testing
- Work to minimize manual testing by identifying high-ROI test cases and automating them
- Be an integrated part of an Agile engineering team, working interactively with software engineering leads, architects, testing engineers, and product managers from the beginning of the development cycle
- Help ensure functionality delivered in each release is fully tested end to end
- Manage multiple priorities and tasks in a dynamic work environment
All About You:
- Bachelor's degree in computer science or equivalent work experience with hands-on technical and quality engineering skills
- Expertise in testing methods, standards, and conventions, including automation and test case creation
- Excellent technical acumen, strong organizational and problem-solving skills with great attention to detail, critical thinking, solid communication, and proven leadership skills
- Solid leadership and mentoring skills with the ability to drive change
- Experience in designing and building testing automation frameworks; expert in API testing
- Experience in UI and mobile automation, testing against different browsers & devices
- Knowledge of Java, SQL, REST APIs, code reviews, scanning tools and configuration, and branching techniques
- Experience with application monitoring tools such as Dynatrace and Splunk
- Experience with performance testing
- Experience with DevOps practices (continuous integration and delivery, and tools such as Jenkins)
Nice to have knowledge or prior experience with any of the following:
- Apache Kafka
- Microservices architecture
- Build tools like Jenkins
Corporate Security Responsibility: Every person working for, or on behalf of, Mastercard is responsible for information security.
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, it is expected that the successful candidate for this position must: abide by Mastercard's security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
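The API-testing expertise this role asks for ultimately reduces to asserting a response contract: status code plus required fields. A hypothetical, framework-agnostic sketch in Python; the response shape and field names are assumptions for illustration, not Mastercard's actual API:

```python
def check_response(resp, expected_status=200, required_fields=()):
    """Collect contract violations for a parsed API response.

    `resp` is a plain dict standing in for an HTTP response that has
    already been decoded; returns a list of human-readable errors,
    empty when the contract holds.
    """
    errors = []
    if resp.get("status") != expected_status:
        errors.append(f"status {resp.get('status')} != {expected_status}")
    for field in required_fields:
        if field not in resp.get("body", {}):
            errors.append(f"missing field: {field}")
    return errors
```

Returning a list of all violations (rather than failing on the first) is a common automation-framework choice: one test run reports every broken field at once.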
Posted 2 days ago
4.0 - 9.0 years
25 - 30 Lacs
Pune
Work from Office
Senior Software Engineer
Overview: Prepaid Management Services is the division of Mastercard that concentrates on prepaid solutions such as our Multi-Currency CashPassport product. Traditionally focused on the travel sector, this business unit is driving Prepaid forward throughout the world with innovative and leading solutions that we integrate with global brands. This role is within the Global Technology Services (GTS) team, which is part of the wider Mastercard Operations & Technologies group. We provide high-quality evolutionary and operational capabilities to support the Mastercard Prepaid Management Services business. A hands-on software engineer working within multi-disciplined, agile teams, responsible for producing high-quality, appropriate web-based solutions for both internal and external customers in Prepaid Management Services. Responsible for the development produced by the Scrum team, ensuring all development work adheres to the design and development standards, guidelines and roadmap defined by the Prepaid Management Services Leadership and Architecture teams. Contributes to the coding and testing of new development and changes, ensuring the code is maintainable and of a high standard of quality. Have you worked in application development in a fast-paced Agile environment? Are you passionate about providing technology-driven solutions to address business needs? Can you provide innovative ideas and ensure continuous improvement as part of day-to-day work?
Role - Key Responsibilities:
- Take a participatory role in sprint planning, daily stand-ups, demonstrations and retrospectives.
- Analyse current processes and systems to produce designs that can be scaled and evolved.
- Translate a technical design into implemented code.
- Responsible for the development of readable and maintainable code, and appropriate unit tests. Actively seek to minimize code and simplify architecture.
- Support test and build automation.
- Produce high-level and detailed estimates.
- Adhere to the development process and suggest improvements where appropriate.
- Ensure individual and team tasks are performed on time by communicating and working closely with other members of the team. Retain a focus on completion, identifying and resolving issues.
- Produce technical design documentation as required and ensure work complies with the architectural roadmap.
- Identify and update existing design documents impacted by changes.
All About You - Key Skills and Experience Required:
- Good experience in Java development (Java 6+, Hibernate, Spring, Spring Boot, web services (REST and SOAP), Eclipse, Apache), AngularJS, Scala
- Good database development knowledge (Oracle v10+, PL/SQL)
- Experience with test-driven development practices and technologies, e.g. JUnit, Maven, etc.
- Experience with CI/CD using Jenkins pipelines
- Experience with Agile development methods
- Experience with version control systems (Git) and CI tools (Jenkins, Fortify, Sonar)
- Strong oral and written communication skills
Desirable:
- SDLC support tools (ALM, Confluence, Selenium, SharePoint)
- Code packaging and deployment automation
- Financial services experience (Cards/PCI)
Personal Qualities: Flexible; creative; excellent problem-solving skills; good communicator; self-starter; leadership ability.
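Test-driven development with JUnit, listed above, follows the xUnit pattern: a small unit of code paired with a test class that pins its behavior. A compact Python analogue using the standard unittest module; the Luhn checksum is a generic payments-flavored illustration, not code from this role:

```python
import unittest

def luhn_valid(number):
    """Luhn check-digit validation, a typical small unit under test
    (illustrative example, not an actual Mastercard function)."""
    digits = [int(d) for d in str(number)][::-1]
    # From the rightmost digit, double every second digit and sum the
    # digits of each product; valid numbers sum to a multiple of 10.
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

class LuhnTest(unittest.TestCase):  # JUnit-style xUnit test case
    def test_valid_number(self):
        self.assertTrue(luhn_valid("79927398713"))

    def test_invalid_number(self):
        self.assertFalse(luhn_valid("79927398710"))
```

In TDD the test class is written first and fails, then the unit is implemented until it passes; the structure is identical in JUnit, just with annotations instead of method-name conventions.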
Posted 2 days ago
3.0 - 8.0 years
15 - 20 Lacs
Pune
Work from Office
Step into the role of a Senior Data Engineer. At Barclays, innovation isn't just encouraged, it's expected. As a Senior Data Engineer you will build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure. To be a successful Senior Data Engineer, you should have experience with:
- Hands-on work with large-scale data platforms and development of cloud solutions on the AWS data platform, with a proven track record of driving business success.
- Strong understanding of AWS and distributed computing paradigms; ability to design and develop data ingestion programs to process large data sets in batch mode using Glue, Lambda, S3, Redshift, Snowflake, and Databricks.
- Ability to develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.
- Hands-on programming experience in Python and PySpark.
- Understanding of DevOps pipelines using Jenkins and GitLab; strong in data modelling and data architecture concepts, and well versed in project management tools and Agile methodology.
- Sound knowledge of data governance principles and tools (Alation/Glue data quality, mesh); capable of suggesting solution architecture for diverse technology applications.
Additional relevant skills given below are highly valued:
- Experience working in the financial services industry and in various Settlements and Sub-ledger functions like PNS, Stock Record and Settlements, PNL.
- Knowledge of BPS, IMPACT & Gloss products from Broadridge, and creating ML models using Python, Spark & Java.
You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.
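Batch-mode ingestion jobs like the Glue/Lambda pipelines described above are typically made idempotent with a checkpoint of already-processed batch identifiers, so a failed job can be safely re-run. A hypothetical pure-Python sketch; the function and argument names are invented for illustration:

```python
def ingest_batches(batches, processed_ids, sink):
    """Idempotent batch ingestion: load each (batch_id, rows) pair once.

    `processed_ids` plays the role of a checkpoint store (in practice a
    DynamoDB table, Glue bookmark, or S3 manifest); `sink` stands in for
    the warehouse target.
    """
    loaded = 0
    for batch_id, rows in batches:
        if batch_id in processed_ids:
            continue  # already landed; safe to re-run the job
        sink.extend(rows)
        processed_ids.add(batch_id)
        loaded += 1
    return loaded
```

Because the checkpoint is consulted before every load, replaying the whole input after a crash loads only the batches that did not make it the first time.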
Purpose of the role: To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure.
Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.
Vice President Expectations:
- Contribute to or set strategy, drive requirements and make recommendations for change. Plan resources, budgets, and policies; manage and maintain policies/processes; deliver continuous improvements and escalate breaches of policies/procedures.
- If managing a team, define jobs and responsibilities, plan for the department's future needs and operations, counsel employees on performance and contribute to employee pay decisions/changes. They may also lead a number of specialists to influence the operations of a department, in alignment with strategic as well as tactical priorities, while balancing short and long term goals and ensuring that budgets and schedules meet corporate requirements.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L - Listen and be authentic, E - Energise and inspire, A - Align across the enterprise, D - Develop others.
- For an individual contributor, they will be a subject matter expert within their own discipline and will guide technical direction.
They will lead collaborative, multi-year assignments and guide team members through structured assignments, identifying the need to include other areas of specialisation to complete assignments. They will train, guide and coach less experienced specialists and provide information affecting long term profits, organisational risks and strategic decisions. They will also:
- Advise key stakeholders, including functional leadership teams and senior management, on functional and cross-functional areas of impact and alignment.
- Manage and mitigate risks through assessment, in support of the control and governance agenda.
- Demonstrate leadership and accountability for managing risk and strengthening controls in relation to the work your team does.
- Demonstrate comprehensive understanding of the organisation's functions to contribute to achieving the goals of the business.
- Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategies.
- Create solutions based on sophisticated analytical thought, comparing and selecting complex alternatives. In-depth analysis with interpretative thinking will be required to define problems and develop innovative solutions.
- Adopt and include the outcomes of extensive research in problem-solving processes.
- Seek out, build and maintain trusting relationships and partnerships with internal and external stakeholders in order to accomplish key business objectives, using influencing and negotiating skills to achieve outcomes.
Posted 2 days ago
3.0 - 6.0 years
7 - 11 Lacs
Gurugram
Work from Office
Responsible for design, development, modification, debugging and/or maintenance of software systems.
What will your job look like?
- Knowledge of the telecom domain (4G, 5G) and network architecture (including SGW, PGW, GGSN, MME, PCRF, OCS, Charging System, Mediation)
- Excellent troubleshooting knowledge for network issues and UNIX and/or Linux operating systems
- Excellent scripting knowledge (Shell or Python)
- Hands-on experience with Kubernetes and Azure DevOps
- Experience working with CI/CD tools such as GitLab and Jenkins, and Agile project management tools
- Working knowledge of Kubernetes (especially kubectl commands) and Docker
- Experience using cloud-native messaging frameworks like Apache Kafka, Kafka Connect, and Kafka Streams
- Experience with the Cassandra database
- Experience with the ELK stack (Elasticsearch, Logstash and Kibana), including visualizations, dashboards, monitoring, and performance tuning/troubleshooting of an Elastic cluster
All you need is...
Why you will love this job: You will be challenged to design and develop new software applications. You will have the opportunity to work in a growing organization, with ever-growing opportunities for personal growth.
Posted 2 days ago
3.0 - 8.0 years
5 - 6 Lacs
Mumbai
Work from Office
Hiring for Big Data Hadoop Developer - Mumbai
Job Summary: We are seeking an experienced Big Data Hadoop Developer with strong expertise in the Hadoop ecosystem and proven experience in managing and developing on Hadoop clusters. You will be responsible for designing, developing, optimizing, and maintaining big data solutions, as well as ensuring cluster health and performance.
Key Responsibilities:
- Design and develop scalable big data solutions using Hadoop ecosystem tools such as HDFS, Hive, Pig, Sqoop, and MapReduce.
- Administer, configure, and optimize Hadoop clusters (Cloudera, Hortonworks, or Apache).
- Develop and maintain ETL pipelines to ingest, process, and analyze large datasets.
- Implement and monitor data security, backup, and recovery strategies on Hadoop clusters.
- Collaborate with data engineers, data scientists, and business analysts to deliver data solutions.
- Perform cluster performance tuning and troubleshoot issues across Hadoop services (YARN, HDFS, Hive, etc.).
- Write and optimize complex HiveQL and Spark jobs.
- Support production deployment and post-deployment monitoring.
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of hands-on experience with the Hadoop ecosystem.
- Experience in Hadoop cluster setup, administration, and troubleshooting.
- Strong knowledge of Hive, HDFS, Pig, Sqoop, Oozie, and YARN.
- Experience with Spark, Kafka, and HBase is a plus.
- Strong programming skills in Java, Scala, or Python.
- Experience with Linux shell scripting and DevOps tools (e.g., Jenkins, Git).
- Familiarity with cloud platforms (AWS EMR, Azure HDInsight, or GCP Dataproc) is a plus.
- Excellent problem-solving and communication skills.
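MapReduce, at the heart of the Hadoop stack above, is a three-phase pattern: map input to (key, value) pairs, shuffle (sort and group by key), then reduce each group. A miniature word count in pure Python to show the shape of a job; real Hadoop distributes each phase across the cluster:

```python
from itertools import groupby

def map_reduce_wordcount(lines):
    """Word count in the MapReduce style: map, shuffle, reduce."""
    # Map phase: emit (word, 1) for every word.
    mapped = [(word, 1) for line in lines for word in line.split()]
    # Shuffle phase: sort so equal keys become adjacent groups.
    mapped.sort(key=lambda kv: kv[0])
    # Reduce phase: sum the counts within each key's group.
    return {key: sum(v for _, v in group)
            for key, group in groupby(mapped, key=lambda kv: kv[0])}
```

The same decomposition is why Hadoop scales: mappers and reducers run independently on partitions of the data, with the framework handling only the shuffle between them.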
Posted 2 days ago
2.0 - 8.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Responsibilities:
- Work alongside other developers, testers, BAs, designers, and product owners; you need to be able to communicate complex technical issues and be good at asking hard questions at the right time.
- Work in small teams where collaboration and relationship building are key. We are interested in people who enjoy a dynamic, rapidly changing environment and, importantly, who want to drive change in the organization.
- Work in a "build it and run it" environment where teams build, deploy, monitor and support the components that they own.
Accountabilities: To grow and be successful in the role, you will ideally bring the following:
- Great communication skills. You are happy to work alongside a team where you talk openly and constructively about technical issues.
- Solid knowledge and experience in microservice development, using technologies like Node.js (including frameworks like Fastify/Moleculer) and ES6/TypeScript
- Experience in software and microservice design; familiarity with design patterns and best practices
- API development and integration experience using REST/JSON, Kafka, and message queues
- Experience with API service testing, such as unit, integration, acceptance, TDD/BDD, mocking and stubbing
- Solid DevOps knowledge, including configuring continuous integration, deployment, and delivery tools like Jenkins or GitLab CI
- Container-based development using platforms like Docker, Kubernetes, and OpenShift
- Instrumenting monitoring and logging of applications
- Experience working with microservices on AWS (EKS, Codefresh, GitHub Actions)
Mandatory skill sets (must-have knowledge, skills and experiences):
- Strong understanding of CI/CD pipelines and infrastructure-as-code principles such as Terraform.
- Experience with CI/CD tooling such as GitHub, Jenkins, Codefresh, Docker, Kubernetes.
- Experienced in building RESTful APIs using Java (Spring Boot).
- Experienced in the AWS development environment and ecosystem
- Cloud-native and digital solutioning leveraging emerging technologies incl. containers, serverless, data, API and microservices etc.
- Experience with measuring, analysing, monitoring, and optimizing cloud performance, including cloud system reliability and availability
- Understanding of storage solutions, networking, and security
- Strong familiarity with cloud-platform-specific Well-Architected Frameworks
- Production experience of running services in Kubernetes
Preferred skill sets (good-to-have knowledge, skills and experiences):
- Solid DevOps knowledge, including configuring continuous integration, deployment, and delivery tools like Jenkins or GitLab CI; container-based development using platforms like Docker, Kubernetes, and OpenShift; instrumenting monitoring and logging of applications
- Experience working with microservices on AWS (EKS, Codefresh, GitHub Actions)
Years of experience required: 7 to 8 years (2-3 years relevant)
Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% and above)
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Master of Engineering
Required Skills: Power BI
Additional skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Posted 2 days ago
18.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Enterprise Architect
Grade: VP
Location: Pune / Mumbai / Chennai
Experience: 18+ Years
Organization: Intellect Design Arena Ltd. www.intellectdesign.com
About the Role: We are looking for a senior Enterprise Architect with strong leadership and deep technical expertise to define and evolve the architecture strategy for iGTB, our award-winning transaction banking platform. The ideal candidate will have extensive experience architecting large-scale, cloud-native enterprise applications within the BFSI domain, and will be responsible for driving innovation, ensuring engineering excellence, and aligning architecture with evolving business needs.
Mandatory Skills:
- Cloud-native architecture
- Microservices-based systems
- PostgreSQL, Apache Kafka, ActiveMQ
- Spring Boot / Spring Cloud, Angular
- Strong exposure to the BFSI domain
Key Responsibilities:
- Architectural Strategy & Governance: Define and maintain enterprise architecture standards and principles across iGTB product suites. Set up governance structures to ensure compliance across product lines.
- Technology Leadership: Stay updated on emerging technologies; assess and recommend adoption to improve scalability, security, and performance.
- Tooling & Automation: Evaluate and implement tools to improve developer productivity, code quality, and application reliability, including automation across testing, deployment, and monitoring.
- Architecture Evangelism: Drive adoption of architecture guidelines and tools across engineering teams through mentorship, training, and collaboration.
- Solution Oversight: Participate in the design of individual modules to ensure technical robustness and adherence to enterprise standards.
- Performance & Security: Oversee performance benchmarking and security assessments. Engage with third-party labs for certification as needed.
- Customer Engagement: Represent architecture in pre-sales, CXO-level interactions, and post-production engagements to demonstrate the product's technical superiority.
- Troubleshooting & Continuous Improvement: Support teams in resolving complex technical issues. Capture learnings and feed them back into architectural best practices.
- Automation Vision: Lead the end-to-end automation charter for iGTB across code quality, CI/CD, testing, monitoring, and release management.
Profile Requirements:
- 18+ years of experience in enterprise and solution architecture roles, preferably within BFSI or fintech
- Proven experience with mission-critical, scalable, and secure systems
- Strong communication and stakeholder management skills, including CXO interactions
- Demonstrated leadership in architecting complex enterprise products and managing teams of architects
- Ability to blend technical depth with business context to drive decisions
- Passion for innovation, engineering excellence, and architectural rigor
Posted 2 days ago
7.0 - 12.0 years
5 - 13 Lacs
Pune
Hybrid
So, what's the role all about? NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as NEVA Discover. NICE APA is more than just RPA; it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It's widely used in industries like banking, insurance, telecom, healthcare, and customer service. We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures, handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you'll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.
How will you make an impact?
- Interface with various R&D groups, Customer Support teams, business partners and customers globally to address and resolve product issues.
- Maintain quality and ongoing internal and external communication throughout your investigation.
- Provide a high level of support and minimize R&D escalations.
Prioritize daily missions/cases and mange critical issues and situations. Contribute to the Knowledge Base, document troubleshooting and problem resolution steps and participate in Educating/Mentoring other support engineers. Willing to perform on call duties as required. Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions. Good communication skills with the ability to interact with technical and non-technical stakeholders. Have you got what it takes? Minimum of 8 to 12 years of experience in supporting global enterprise customers. Monitor, troubleshoot, and maintain RPA bots in production environments. Monitor, troubleshoot, system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar Data Analytics - Analyze trends, patterns, and anomalies in data to identify product bugs Familiarity with ETL processes and data pipelines - Advantage Provide L1/L2/L3 support for RPA application, ensuring timely resolution of incidents and service requests Familiarity applications running on Linux-based Kubernetes clusters Troubleshoot and resolve incidents related to pods, services, and deployments Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance. Familiarity with Authentication methods like WinSSO and SAML. Knowledge in Windows/Linux Hardening like TLS enforcement, Encryption Enforcement, Certificate Configuration Working and Troubleshooting knowledge in Apache Software components like Tomcat, Apache and ActiveMQ. Working and Troubleshooting knowledge in SVN/Version Control applications Knowledge in DB schema, structure, SQL queries (DML, DDL) and troubleshooting Collect and analyze logs from servers, network devices, applications, and security tools to identify Environment/Application issues. 
Knowledge in terminal server (Citrix)- advantage Basic understanding on AWS Cloud systems. Network troubleshooting skills (working with different tools) Certification in RPA platforms and working knowledge in RPA application development/support – advantage. NICE Certification - Knowledge in RTI/RTS/APA products – Advantage Integrate NICE's applications with customers on-prem and cloud-based 3rd party tools and applications to ingest/transform/store/validate data. Shift- 24*7 Rotational Shift (include night shift) Other Required Skills: Excellent verbal and written communication skills Strong troubleshooting and problem-solving skills. Self-motivated and directed, with keen attention to details. Team Player - ability to work well in a team-oriented, collaborative environment. Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7326 Reporting into: Tech Manager Role Type: Individual Contributor
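To give a concrete flavor of the log-collection and analysis duties listed above, here is a minimal, hypothetical Python sketch. The log format, sample lines, and function names are illustrative assumptions for demonstration only; they are not part of NICE's products or this role description.

```python
import re
from collections import Counter

# Hypothetical Tomcat-style log lines; real formats vary by deployment.
LOG_LINES = [
    "12-Jan-2024 10:01:02 INFO  [main] Server startup in 4213 ms",
    "12-Jan-2024 10:05:17 ERROR [ajp-nio-8009] java.net.SocketTimeoutException",
    "12-Jan-2024 10:05:18 WARN  [ajp-nio-8009] Connection pool nearing limit",
    "12-Jan-2024 10:07:44 ERROR [ajp-nio-8009] java.net.SocketTimeoutException",
]

LEVEL_RE = re.compile(r"\b(INFO|WARN|ERROR)\b")

def summarize(lines):
    """Count log levels and collect distinct ERROR messages."""
    levels = Counter()
    errors = set()
    for line in lines:
        match = LEVEL_RE.search(line)
        if not match:
            continue  # skip lines without a recognizable level
        levels[match.group(1)] += 1
        if match.group(1) == "ERROR":
            # Keep only the message portion after the thread tag.
            errors.add(line.split("] ", 1)[-1])
    return levels, errors

levels, errors = summarize(LOG_LINES)
```

In practice, a support engineer would point a script like this at collected server logs to spot recurring errors (here, two occurrences of one distinct exception) before escalating to R&D.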
Posted 2 days ago
6.0 - 9.0 years
4 - 9 Lacs
Pune
Hybrid
So, what's the role all about?

NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA: it is a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It is widely used in industries like banking, insurance, telecom, healthcare, and customer service.

We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures and handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you'll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
- Interface with various R&D groups, Customer Support teams, business partners, and customers globally to address and resolve product issues.
- Maintain quality and ongoing internal and external communication throughout your investigation.
- Provide a high level of support and minimize R&D escalations.
- Prioritize daily missions/cases and manage critical issues and situations.
- Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
- Be willing to perform on-call duties as required.
- Excellent problem-solving skills, with the ability to analyze complex issues and implement effective solutions.
- Good communication skills, with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
- Minimum of 5 to 7 years of experience in supporting global enterprise customers.
- Monitor, troubleshoot, and maintain RPA bots in production environments.
- Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
- Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs.
- Familiarity with ETL processes and data pipelines - an advantage.
- Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
- Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
- Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
- Familiarity with authentication methods like WinSSO and SAML.
- Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
- Working and troubleshooting knowledge of Apache software components such as Tomcat, Apache HTTP Server, and ActiveMQ.
- Working and troubleshooting knowledge of SVN/version-control applications.
- Knowledge of DB schema and structure, SQL queries (DML, DDL), and troubleshooting.
- Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
- Knowledge of terminal servers (Citrix) - an advantage.
- Basic understanding of AWS cloud systems.
- Network troubleshooting skills (working with different tools).
- Certification in RPA platforms and working knowledge of RPA application development/support - an advantage.
- NICE certification - knowledge of RTI/RTS/APA products - an advantage.
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest/transform/store/validate data.
- Shift: 24x7 rotational shift (includes night shifts).

Other Required Skills:
- Excellent verbal and written communication skills.
- Strong troubleshooting and problem-solving skills.
- Self-motivated and directed, with keen attention to detail.
- Team player: ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7556
Reporting into: Tech Manager
Role Type: Individual Contributor
Posted 2 days ago
4.0 - 9.0 years
6 - 11 Lacs
Gurugram
Work from Office
Company: Mercer
Description: We are seeking a talented individual to join our Technology team at Mercer. This role will be based in Gurugram. This is a hybrid role that has a requirement of working at least three days a week in the office.

Senior DevOps Engineer

We are looking for an ideal candidate with a minimum of 4 years of experience in DevOps. The candidate should have a strong and deep understanding of Amazon Web Services (AWS) and DevOps tools like Terraform, Ansible, and Jenkins.

Location: Gurgaon
Functional Area: Engineering
Education Qualification: Graduate/Postgraduate
Experience: 4-6 Years

We will count on you to:
- Deploy infrastructure on the AWS cloud using Terraform
- Deploy updates and fixes
- Build tools to reduce the occurrence of errors and improve customer experience
- Perform root-cause analysis of production errors and resolve technical issues
- Develop scripts for automation
- Troubleshooting and maintenance

What you need to have:
- 4+ years of technical experience in the DevOps area
- Knowledge of the following technologies and applications: AWS, Terraform, Linux administration, shell scripting, Ansible, CI server (Jenkins), Apache/Nginx/Tomcat

Good to have experience in the following technologies:
- Python

What makes you stand out:
- Excellent verbal and written communication skills, comfortable interfacing with business users
- Good troubleshooting and technical skills
- Able to work independently

Why join our team: We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.
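The automation scripting and error-reduction duties above can be sketched with a generic retry-with-backoff helper, a pattern commonly used in deployment and remediation scripts. This is an illustrative example only; the function names are hypothetical and not part of any Mercer toolchain.

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn until it succeeds, sleeping base_delay * 2**i between tries."""
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:  # narrow the exception type in real scripts
            last_exc = exc
            time.sleep(base_delay * (2 ** i))  # exponential backoff
    raise last_exc

# Example: a flaky operation that succeeds on the third call.
calls = {"n": 0}

def flaky_deploy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient deploy error")
    return "deployed"

result = retry(flaky_deploy)
```

Wrapping transient operations (API calls, deployments, health checks) this way reduces the one-off failures that would otherwise page an engineer.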
Mercer, a business of Marsh McLennan (NYSE: MMC), is a global leader in helping clients realize their investment objectives, shape the future of work and enhance health and retirement outcomes for their people. Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $23 billion and more than 85,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit mercer.com, or follow on LinkedIn and X. The Mercer Assessments business, one of the fastest-growing verticals within the Mercer brand, is a leading global provider of talent measurement and assessment solutions. As part of Mercer, the world's largest HR consulting firm and a wholly owned subsidiary of Marsh McLennan, we are dedicated to delivering talent foresight that empowers organizations to make informed, critical people decisions. Leveraging a robust, cloud-based assessment platform, Mercer Assessments partners with over 6,000 corporations, 31 sector skill councils, government agencies, and more than 700 educational institutions across 140 countries. Our mission is to help organizations build high-performing teams through effective talent acquisition, development, and workforce transformation strategies. Our research-backed assessments, advanced technology, and comprehensive analytics deliver transformative outcomes for both clients and their employees. We specialize in designing tailored assessment solutions across the employee lifecycle, including pre-hire evaluations, skills assessments, training and development, certification exams, competitions and more. At Mercer Assessments, we are committed to enhancing the way organizations identify, assess, and develop talent.
By providing actionable talent foresight, we enable our clients to anticipate future workforce needs and make strategic decisions that drive sustainable growth and innovation. Mercer, a business of Marsh McLennan (NYSE: MMC), is a global leader in helping clients realize their investment objectives, shape the future of work and enhance health and retirement outcomes for their people. Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit mercer.com, or follow on LinkedIn and X. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.
Posted 2 days ago
3.0 - 8.0 years
6 - 14 Lacs
Gurugram
Work from Office
Data Engineer: The ideal candidate will have strong expertise in Python, Apache Spark, and Databricks, along with experience in machine learning.
Posted 2 days ago
18.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Role: Enterprise Architect
Grade: VP
Location: Pune / Mumbai / Chennai
Experience: 18+ Years
Organization: Intellect Design Arena Ltd. www.intellectdesign.com

About the Role: We are looking for a senior Enterprise Architect with strong leadership and deep technical expertise to define and evolve the architecture strategy for iGTB, our award-winning transaction banking platform. The ideal candidate will have extensive experience architecting large-scale, cloud-native enterprise applications within the BFSI domain, and will be responsible for driving innovation, ensuring engineering excellence, and aligning architecture with evolving business needs.

Mandatory Skills:
- Cloud-native architecture
- Microservices-based systems
- PostgreSQL, Apache Kafka, ActiveMQ
- Spring Boot / Spring Cloud, Angular
- Strong exposure to the BFSI domain

Key Responsibilities:
- Architectural Strategy & Governance: Define and maintain enterprise architecture standards and principles across iGTB product suites. Set up governance structures to ensure compliance across product lines.
- Technology Leadership: Stay updated on emerging technologies; assess and recommend adoption to improve scalability, security, and performance.
- Tooling & Automation: Evaluate and implement tools to improve developer productivity, code quality, and application reliability, including automation across testing, deployment, and monitoring.
- Architecture Evangelism: Drive adoption of architecture guidelines and tools across engineering teams through mentorship, training, and collaboration.
- Solution Oversight: Participate in the design of individual modules to ensure technical robustness and adherence to enterprise standards.
- Performance & Security: Oversee performance benchmarking and security assessments. Engage with third-party labs for certification as needed.
- Customer Engagement: Represent architecture in pre-sales, CXO-level interactions, and post-production engagements to demonstrate the product's technical superiority.
- Troubleshooting & Continuous Improvement: Support teams in resolving complex technical issues. Capture learnings and feed them back into architectural best practices.
- Automation Vision: Lead the end-to-end automation charter for iGTB, spanning code quality, CI/CD, testing, monitoring, and release management.

Profile Requirements:
- 18+ years of experience in enterprise and solution architecture roles, preferably within BFSI or fintech
- Proven experience with mission-critical, scalable, and secure systems
- Strong communication and stakeholder management skills, including CXO interactions
- Demonstrated leadership in architecting complex enterprise products and managing teams of architects
- Ability to blend technical depth with business context to drive decisions
- Passion for innovation, engineering excellence, and architectural rigor
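The microservices stack named above (Kafka/ActiveMQ for messaging) revolves around decoupled producers and consumers. As a toy, in-process illustration of that pattern, not the actual iGTB implementation or a real broker client, a queue can stand in for a topic:

```python
from queue import Queue

# Toy in-process stand-in for a message-broker topic (Kafka/ActiveMQ in a
# real platform); illustrates how producers and consumers are decoupled.
topic = Queue()

def produce(event):
    """Publish an event to the topic without knowing who will consume it."""
    topic.put(event)

def consume_all():
    """Drain the topic and return every pending event in order."""
    events = []
    while not topic.empty():
        events.append(topic.get())
    return events

# Hypothetical transaction-banking events, purely for demonstration.
produce({"type": "payment.initiated", "amount": 100})
produce({"type": "payment.settled", "amount": 100})
events = consume_all()
```

In a production architecture, the broker adds durability, partitioning, and independent consumer scaling, which is what makes this decoupling valuable at enterprise scale.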
Posted 2 days ago
5.0 - 6.0 years
55 - 60 Lacs
Pune
Work from Office
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.

Grade Specific: The role supports the team in building and maintaining data infrastructure and systems within an organization.

Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Bigtable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management.

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fuelled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 2 days ago
Apache is a widely used software foundation that offers a range of open-source software solutions. In India, the demand for professionals with expertise in Apache tools and technologies is on the rise. Job seekers looking to pursue a career in Apache-related roles have a plethora of opportunities in various industries. Let's delve into the Apache job market in India to gain a better understanding of the landscape.
These cities are known for their thriving IT sectors and see a high demand for Apache professionals across different organizations.
The salary range for Apache professionals in India varies based on experience and skill level:
- Entry-level: INR 3-5 lakhs per annum
- Mid-level: INR 6-10 lakhs per annum
- Experienced: INR 12-20 lakhs per annum
In the Apache job market in India, a typical career path may progress as follows:
1. Junior Developer
2. Developer
3. Senior Developer
4. Tech Lead
5. Architect
Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:
- Linux
- Networking
- Database Management
- Cloud Computing
As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!