0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description
Mandatory skills:
– Strong Linux skills
– Good knowledge of AWS and GCP cloud
– Good knowledge of Terraform, Java, and shell scripting
– Good understanding of Kafka, Zookeeper, Hadoop, HBase, Spark, and Hive
– Good understanding of Elasticsearch; knowledge of Aerospike would be an added advantage
– Should have worked on setting up and managing big data platforms on AWS or GCP cloud
Good-to-have skills: knowledge of Druid, Airflow, and Tableau
Requirements: AWS, Google Cloud, Terraform, Shell-Scripting, Hadoop, Apache Spark, Hive
Job responsibilities: as per the mandatory skills listed above.
What we offer Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. 
Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
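The posting above asks for a good understanding of Kafka. One core Kafka idea worth knowing is that keyed messages are routed to partitions by hashing the key, which preserves per-key ordering. The sketch below illustrates that routing logic only; Kafka's real producer uses murmur2 hashing, and the function name here is a made-up example.

```python
# Simplified sketch of key-based partitioning, the idea behind how Kafka
# routes keyed messages. A deterministic digest is used because Python's
# built-in hash() is randomized across runs; this is NOT Kafka's murmur2.
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a message key to a partition deterministically."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Messages with the same key always land on the same partition,
# which is what preserves per-key ordering.
p1 = partition_for("user-42", 6)
p2 = partition_for("user-42", 6)
```

Because the mapping depends only on the key and the partition count, adding partitions to a topic changes where keys land, which is why partition counts are chosen carefully up front.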
Posted 2 months ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Who We Are Addepar is a global technology and data company that helps investment professionals provide the most informed, precise guidance for their clients. Hundreds of thousands of users have entrusted Addepar to empower smarter investment decisions and better advice over the last decade. With client presence in more than 50 countries, Addepar’s platform aggregates portfolio, market and client data for over $7 trillion in assets. Addepar’s open platform integrates with more than 100 software, data and services partners to deliver a complete solution for a wide range of firms and use cases. Addepar embraces a global flexible workforce model with offices in Silicon Valley, New York City, Salt Lake City, Chicago, London, Edinburgh, Pune, and Dubai. The Role We are currently seeking a Senior Software Engineer to join our Platform Services team! The Platform Services team works on distributed services that are used across the entire Addepar engineering stack. We write critical, performance-sensitive code that the organization depends on when delivering customer-facing applications. As a member of the team, you will work alongside talented engineers to design, implement, and roll out systems instrumental in Addepar's growth and global expansion. Our Areas Of Focus And Expertise Include Distributed systems (microservices, sharded computations) Interprocess messaging and coordination (Kafka, Zookeeper, service meshes like Istio) Observability (distributed tracing, logging, performance and application monitoring) Build systems, test systems, cloud-native deployment suites Who You Are 5+ years of backend software engineering experience Proficiency in object-oriented languages such as Java Proficiency with relational and non-relational datastores Proficiency in CI/CD, monitoring, and logging systems Expertise in any of the above technical requirements is a huge plus! Our Values Act Like an Owner - Think and operate with intention, purpose and care. Own outcomes. 
Build Together - Collaborate to unlock the best solutions. Deliver lasting value. Champion Our Clients - Exceed client expectations. Our clients’ success is our success. Drive Innovation - Be bold and unconstrained in problem solving. Transform the industry. Embrace Learning - Engage our community to broaden our perspective. Bring a growth mindset. In addition to our core values, Addepar is proud to be an equal opportunity employer. We seek to bring together diverse ideas, experiences, skill sets, perspectives, backgrounds and identities to drive innovative solutions. We commit to promoting a welcoming environment where inclusion and belonging are held as a shared responsibility. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. PHISHING SCAM WARNING: Addepar is among several companies recently made aware of a phishing scam involving con artists posing as hiring managers recruiting via email, text and social media. The imposters are creating misleading email accounts, conducting remote “interviews,” and making fake job offers in order to collect personal and financial information from unsuspecting individuals. Please be aware that no job offers will be made from Addepar without a formal interview process. Additionally, Addepar will not ask you to purchase equipment or supplies as part of your onboarding process. If you have any questions, please reach out to TAinfo@addepar.com.
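The role above calls out sharded computations as a focus area. One standard technique for distributing work or data across nodes is consistent hashing; the sketch below is a minimal illustration under assumed node names, not Addepar's actual implementation (production rings also add virtual nodes and replication).

```python
# Minimal consistent-hashing sketch: nodes and keys are hashed onto a ring,
# and each key is owned by the first node clockwise from its hash point.
# Only a small fraction of keys move when a node joins or leaves.
import bisect
import hashlib

def _h(s: str) -> int:
    return int.from_bytes(hashlib.sha1(s.encode("utf-8")).digest()[:8], "big")

class HashRing:
    def __init__(self, nodes):
        self._ring = sorted((_h(n), n) for n in nodes)
        self._points = [p for p, _ in self._ring]

    def owner(self, key: str) -> str:
        idx = bisect.bisect(self._points, _h(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])   # hypothetical shard nodes
owner = ring.owner("portfolio-123")                # made-up key
```

The lookup is deterministic, so any client with the same node list routes a given key to the same shard without coordination.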
Posted 2 months ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the role You will be responsible for: We are seeking high-performing developers to work on re-platforming an on-premise digital wallet into a set of microservices. Developers will be expected to maintain the legacy product and deliver business-driven changes alongside rebuild work. The candidate is expected to be up to date with modern development technologies and techniques. You will be expected to have good communication skills and to challenge, where appropriate, the what, how, and why of code and designs to ensure the optimal end solution. -Good knowledge of and working experience with the big data Hadoop ecosystem and distributed systems -Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms -The Data Engineer will work with a highly efficient team of data scientists and data engineers -Excellent programming skills in Scala/Spark and shell scripting -Prior experience using technologies like Oozie, Hive, Spark, HBase, NiFi, Sqoop, and Zookeeper -Good knowledge of engineering practices like CI/CD, Jenkins, Maven, and GitHub -Good experience with Kafka and Schema Registry -Good exposure to cloud computing (Azure/AWS) -Aware of different design patterns, optimization techniques, and locking principles -Should know how to scale systems and optimize performance using caching -Should have worked on batch and streaming pipelines -Implement end-to-end Hadoop ecosystem components and accompanying frameworks with minimal assistance -Good understanding of NFRs (scalability, reliability, maintainability, usability, fault-tolerant systems) -Drive out features via appropriate test frameworks -Translate small behaviour requirements into tasks and code -Develop high-quality code that enables rapid delivery, ruthlessly pursuing continuous integration and delivery -Commit code early and often, demonstrating your understanding of version control and branching strategies. 
-Apply patterns for integration (events/services) and identify patterns in code, refactoring towards them where it increases understanding and/or maintainability, with minimal guidance. -Follow the best practices of continuous BDD/TDD/performance/security/smoke testing. -Work effectively with your product stakeholders to communicate and translate their needs into improvements in your product. -Certifications like Hortonworks/Cloudera Developer Certifications are an added advantage. -Excellent communication and presentation skills; should demonstrate thought leadership and influence people. -Strong computer science fundamentals, logical thinking, and reasoning. -Participate in team ceremonies. -Support production systems, resolve incidents, and perform root cause analysis. -Debug code and support/maintain the software solution. You will need: see the responsibilities section above. What's in it for you? At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable. Salary - Your fixed pay is the guaranteed pay as per your contract of employment. Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy. Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF. Health is Wealth - Tesco promotes programmes that support a culture of health and wellness including insurance for colleagues and their family. 
Our medical insurance provides coverage for dependents including parents or in-laws. Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents. Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request. Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan. Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. About Us Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues. Tesco Technology Today, our Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India. In India, our Technology division includes teams dedicated to Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and other roles. 
At Tesco, our retail platform comprises a wide array of capabilities, value propositions, and products, essential for crafting exceptional retail experiences for our customers and colleagues across all channels and markets. This platform encompasses all aspects of our operations – from identifying and authenticating customers to managing products, pricing, promotions, product discovery, payment, and delivery. By developing a comprehensive Retail Platform, we ensure that as customer touchpoints and devices evolve, we can consistently deliver seamless experiences. This adaptability allows us to respond flexibly without the need to overhaul our technology, thanks to the capabilities we have built.
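The responsibilities above include scaling systems and optimizing performance using caching. A minimal sketch of in-process memoization with the standard library's `functools.lru_cache`; the price-lookup function and its return value are made up for illustration, standing in for an expensive database or service call.

```python
# Caching sketch: repeated lookups for the same SKU hit the in-process
# cache instead of the (hypothetical) slow backend. The CALLS counter
# exists only to show how many times the backend was actually hit.
from functools import lru_cache

CALLS = {"count": 0}

@lru_cache(maxsize=1024)
def get_price(sku: str) -> float:
    CALLS["count"] += 1   # stands in for an expensive DB/service round trip
    return 10.0           # hypothetical price

get_price("A1")
get_price("A1")
get_price("A1")
```

The trade-off, as in any cache, is staleness: a real system would pair this with an eviction or invalidation policy (`lru_cache` offers `cache_clear()` and size-based eviction only).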
Posted 2 months ago
7.0 - 14.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Experience Range: 7-14 Years Domain: BFSI Job Responsibilities: Working in an agile framework, developing DevOps-based automation solutions such as building CI/CD pipelines, automated deployments and verification, containerization, blue/green deployments, etc. Regularly reviewing processes and identifying areas for optimization to improve the whole lifecycle of our products – deployment and operation. Scaling systems safely through mechanisms like automation and observability. Writing clear and detailed documentation of DevOps processes and best practices for department reference and knowledge sharing. Working with IT colleagues globally across several front/middle/back-end/operations teams, ensuring seamless integration of solutions. Skills: Strong experience in DevOps covering Release Automation using the infra of Morgan Blue/Green Deployment Observability – OTel, Grafana, Loki, Prometheus/Cortex, Tempo Exposure to these technologies: API and message-based architecture Load balancer, ZooKeeper (basic knowledge) Docker/Podman/Kubernetes Jenkins pipelines/GitHub Actions Cloud offerings such as Azure and AWS Configuration management tools such as Ansible, Chef (desirable) Linux (intermediate) Strong scripting experience in languages like Python, Groovy, Bash, PowerShell Infrastructure as Code (IaC) Knowledge of any modern programming language (e.g. Java/C#/Scala) is desirable Excellent communication, teamwork and interpersonal skills Must be familiar with code management tools such as Git/Stash Agile software development experience, preferably Scrum Strong analytical capability and problem-solving skills Equity Derivatives product knowledge is desirable.
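Blue/green deployment, listed above, keeps two identical environments and flips traffic to the freshly deployed one only after it passes health checks; a failed check means traffic stays on the live colour. A hedged sketch of just that switch decision (the function and colour names are illustrative, not any particular tool's API):

```python
# Blue/green switch logic: traffic moves to the candidate colour only if
# its post-deploy health check passed; otherwise the current colour keeps
# serving, which is the "instant rollback" property of blue/green.
def choose_active(current: str, candidate_healthy: bool) -> str:
    """Return the colour that should receive traffic after a deploy."""
    candidate = "green" if current == "blue" else "blue"
    return candidate if candidate_healthy else current

active = choose_active("blue", candidate_healthy=True)
rolled_back = choose_active("blue", candidate_healthy=False)
```

In practice the flip is a router, load balancer, or service-mesh weight change, and the old colour is kept warm for a rollback window before being reused for the next deploy.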
Posted 2 months ago
5.0 years
0 Lacs
Lucknow, Uttar Pradesh, India
On-site
Job Description We are seeking a highly skilled and customer-focused Technical Support Engineer to join our team. This role involves delivering high-quality technical support to our customers, troubleshooting complex technical issues, and collaborating with cross-functional teams to ensure customer success. The Technical Support Engineer is expected to provide advanced technical support on a data-based software product to a global client base and partners. This role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The primary responsibility is to troubleshoot and resolve technical issues, support product adoption, and ensure customer satisfaction. The TSE must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (Bash/Python). They work collaboratively with engineering teams to escalate and resolve complex issues when necessary (e.g. when a code change is required or a behavior is seen for the first time). Roles and Responsibilities Respond to customer inquiries and provide in-depth technical support via multiple communication channels. Collaborate with core engineering and solution engineering teams to diagnose and resolve complex technical problems. Create and maintain public documentation, internal knowledge base articles, and FAQs. Monitor and meet SLAs. Triage varying issues in a timely manner based on error messages, log files, thread dumps, stack traces, sample code, and other available data points. Efficiently troubleshoot cluster issues across multiple servers, data centers, and regions, in a variety of cloud (AWS, Azure, GCP, etc.), virtual, and bare-metal environments. The candidate will work during the EMEA time zone (2 PM to 10 PM shift). Requirements Must-Have Skills Education: B.Tech in Computer Engineering, Information Technology, or a related field. 
Experience: Graph database experience is a must; 5+ years of experience in a technical support role on a data-based software product, at least at L3 level. Linux Expertise: 4+ years with an in-depth understanding of Linux, including filesystem, process management, memory management, networking, and security. Graph Databases: 3+ years of experience with Neo4j or similar graph database systems. SQL Expertise: 3+ years of experience in SQL for database querying, performance tuning, and debugging. Data Streaming & Processing: 2+ years of hands-on experience with Kafka, Zookeeper, and Spark. Scripting & Automation: 2+ years with strong skills in Bash scripting and Python for automation, task management, and issue resolution. Containerization & Orchestration: 1+ year of proficiency in Docker, Kubernetes, or other containerization technologies is essential. Monitoring & Performance Tools: Experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring. Networking & Load Balancing: Proficient in TCP/IP, load-balancing strategies, and troubleshooting network-related issues. Web & API Technologies: Understanding of HTTP, SSL, and REST APIs for debugging and troubleshooting API-related issues. Nice-to-have Skills Familiarity with Data Science or ML would be an edge. Experience with LDAP, SSO, OAuth authentication. Strong understanding of database internals and system architecture. Cloud certification (at least DevOps Engineer level).
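Triage of the kind described above starts with bucketing log lines by severity so the loudest failures surface first. A small Python sketch; the timestamp-then-level log format is an assumption for the example, not the product's actual format.

```python
# Log triage sketch: count lines per severity level. Assumes lines shaped
# like "<timestamp> <LEVEL> <message>", which is a made-up format here.
from collections import Counter

def triage(lines):
    levels = Counter()
    for line in lines:
        parts = line.split(" ", 2)   # timestamp, level, rest of message
        if len(parts) >= 2:
            levels[parts[1]] += 1
    return levels

sample = [
    "2024-01-01T10:00:00 ERROR replica out of sync",
    "2024-01-01T10:00:01 WARN gc pause 2s",
    "2024-01-01T10:00:02 ERROR replica out of sync",
]
counts = triage(sample)
```

A real support workflow would extend this with deduplication of repeated messages and extraction of stack traces, but the first pass is almost always a severity histogram like this.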
Posted 2 months ago
6.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Description for Java Developer Preferred Experience: 6 - 9 years Work Location: Kolkata About RS Software RS Software builds global, national and enterprise payment platforms, and has a presence on four continents. The product suite combines with knowledge systems built over 30 years, delivering mission-critical payment solutions that combine innovation and entrepreneurship to create the new gold standard for digital payments. With approximately 40% of the global digital payment volumes in 2021 processed on platforms built by RS Software, the vision is to deliver Payments at the Speed of Thought. RS Software is focused on the global payments modernization market, providing large-scale, high-performance payment systems, serving central infrastructures, financial institutions, payment network providers, payment processors and software companies providing products to the payment industry. The company's product suite offers ISO 20022-ready, open payments architecture using a cloud-based microservices framework - optimizing costs, enabling seamless integrations with commoditized products, and accelerating the pace of adoption. The company’s solutions today are installed in 12 of the top 20 banks in India, and the four major platforms built by RS Software cumulatively process 350+ billion transactions annually world-wide, giving the company a rare track record in the payments domain. RS Software’s product suite is getting recognized in some important markets, which is creating strategic partnerships, the foundation for the company’s long-term growth. RS Software has built India’s digital payment infrastructure, the three major payment platforms, which are transforming the lives of a billion+ people. Instant digital payment platform (UPI) Bill payment platform - Bharat Bill Payment System (BBPS) Enterprise Fraud and Risk Management (EFRM) Why RS Software? 
RS provides a unique experience of engaging in world-class product development and prestigious large-scale payment platforms that cater to billions of people. We provide the opportunity to learn and develop high-throughput transaction processing systems. RS Software is one of the few technology and payment solutions providers where talented individuals have the opportunity to work on cutting-edge, complex, and mission-critical IT projects. We offer ample career opportunities to hardworking and skilled employees. Our Talent Management Program is specifically designed to identify the interests of each employee and match them with suitable career paths within their desired domains, allowing them to make the best possible use of their skillsets in reaching their goals. We invest in the knowledge and skill development of our employees with RS School of Payments – the industry’s most comprehensive training platform. There are three main areas of focus that the Academy and School address: current technology skills, professional development and payments domain knowledge. Our customized training program, well-defined career mapping process and comprehensive appraisal system is designed to help every employee achieve their goals. To address the challenges of relocation, we offer employees coming from other regions reimbursement for expenses associated with their moves as well as complimentary interim facilities, such as guesthouse accommodations, to ease the transition. We also assist employees with finding suitable housing. Role Overview: As a Product Team – Java Developer, you will play a key role in designing, developing, and optimizing features for our real-time fraud detection solution. You will work closely with cross-functional teams to deliver high-performance, scalable, and secure software components. Your work will directly impact our ability to detect and mitigate fraud in digital payment systems. 
Key Responsibilities: Design, develop, and maintain robust, scalable, and high-performance software solutions using Core Java and Spring Boot. Implement and maintain microservices architecture for our real-time fraud detection solution. Develop and integrate REST APIs to ensure seamless communication between services. Utilize Spring Boot Cloud Services to enhance the scalability and reliability of the fraud detection platform. Apply OOP (Object-Oriented Programming) concepts to create clean, modular, and maintainable code. Leverage multi-threading and data concurrency handling to build efficient, real-time fraud detection processes. Manage data access and manipulation using JDBC and JPA (Java Persistence API). Handle FTP operations within the code to manage file transfers securely. Collaborate with the DevOps team to enhance deployment processes and maintain the reliability of our microservices. Work closely with the data science team to integrate machine learning models into the fraud detection system. Monitor and optimize system performance to handle large volumes of transactions and data efficiently. Required Skills and Experience: Strong proficiency in Core Java (preferably version 17+) and in-depth knowledge of OOP (Object-Oriented Programming) and functional programming (Stream & Lambda) concepts. Core Java design patterns and anti-patterns. Hands-on experience with the Spring framework, Spring Boot, and Spring Cloud, with a clear understanding of how they interrelate. Experience in building and consuming REST APIs. Strong understanding of microservices architecture and related frameworks. Experience with the nuances and complexities of multi-threaded environments and data concurrency in Java applications. Proficiency in JDBC and JPA for data persistence and management, and the criteria for choosing between them. Expertise in RDBMS queries such as CTEs, window functions, etc. Experience with FTP handling from within code for secure data transfer. 
Familiarity with CORS (Cross-Origin Resource Sharing) and its implementation for secure web communications. Familiarity with Kafka, Zookeeper, Redis and NoSQL databases like Cassandra. Strong problem-solving skills and ability to work in a fast-paced, agile development environment. Excellent communication and teamwork skills, with the ability to collaborate effectively across teams. Familiarity with the conversion from functional requirements to technical ones and low-level design considerations. Familiarity with build tools like Maven, Gradle, etc. Familiarity with common coding standards/best practices and SonarQube. Proficiency in containers (like Docker), orchestration systems (like Kubernetes), and cloud platforms (like AWS/GCP/Azure). Familiarity with version management tools like Git and collaborative tools like JIRA. Preferred Skills: Experience working with real-time payment systems or fraud detection solutions. Knowledge of machine learning integration in Java-based applications. Familiarity with CI/CD pipelines and cloud deployment strategies. Understanding of security best practices in web and API development.
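The requirements above call out RDBMS expertise with CTEs and window functions. Both can be demonstrated against SQLite (which supports window functions since 3.25) via the Python standard library's `sqlite3` module; the `txn` table and its rows are invented purely for the example.

```python
# CTE + window function illustration against an in-memory SQLite database.
# The txn table is made up; the query computes a per-account running total
# shape: SUM(...) OVER (PARTITION BY ...) without collapsing the rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE txn (acct TEXT, amt REAL);
INSERT INTO txn VALUES ('A', 10), ('A', 30), ('B', 5);
""")
rows = conn.execute("""
WITH totals AS (                                        -- CTE
  SELECT acct,
         SUM(amt) OVER (PARTITION BY acct) AS acct_total -- window function
  FROM txn
)
SELECT DISTINCT acct, acct_total FROM totals ORDER BY acct
""").fetchall()
```

Unlike `GROUP BY`, the window aggregate keeps every input row available, which is why the outer query needs `DISTINCT` to collapse to one row per account here.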
Posted 2 months ago
0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
What is your Role? You will work in a multi-functional role with a combination of expertise in System and Hadoop administration. You will work in a team that often interacts with customers on various aspects related to technical support for the deployed system. You will be deputed at customer premises to assist customers with issues related to System and Hadoop administration. You will interact with the QA and Engineering teams to coordinate issue resolution within the SLA promised to the customer. What will you do? Deploying and administering the Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystems. Installing the Linux operating system and networking. Writing Unix shell/Ansible scripting for automation. Maintaining core components such as Zookeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, HBase, etc. Taking care of the day-to-day running of Hadoop clusters using Ambari/Cloudera Manager/other monitoring tools, ensuring that the Hadoop cluster is up and running all the time. Maintaining HBase clusters and capacity planning. Maintaining the SOLR cluster and capacity planning. Working closely with the database team, network team and application teams to make sure that all the big data applications are highly available and performing as expected. Managing the KVM virtualization environment. What skills should you have? Technical Domain: Linux administration, Hadoop infrastructure and administration, SOLR, configuration management (Ansible etc.).
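The day-to-day cluster upkeep above is usually scripted: check each service's state, then restart what is down, bringing coordination services up first. A hedged Python sketch of just the planning step; the service names and status strings are stand-ins for what `systemctl is-active` or Ambari's API would actually report.

```python
# Check-and-restart planning sketch for a Hadoop-style cluster. Statuses
# are hypothetical; a real script would query systemd or the cluster
# manager's API instead of taking a dict.
def plan_restarts(status: dict) -> list:
    """Return services needing a restart, Zookeeper first (quorum first)."""
    down = [svc for svc, state in status.items() if state != "running"]
    # Restore the coordination layer before its dependents.
    return sorted(down, key=lambda s: (s != "zookeeper", s))

plan = plan_restarts({
    "zookeeper": "stopped",
    "kafka": "running",
    "hdfs-datanode": "stopped",
})
```

Ordering matters because services like Kafka and HBase depend on a healthy Zookeeper quorum; restarting dependents first just produces a second round of failures.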
Posted 2 months ago
40.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description Oracle Database Appliance (ODA) is a converged architecture of software, compute, networking, and storage, optimized and engineered to offer performance and scale for Oracle Databases and applications. Thousands of ODA deployments are used by enterprise customers to run their business-critical Oracle databases and applications. The next generation of ODA software will scale out to manage a pool of ODA systems, and will be the primary infrastructure piece for ODA-based private and hybrid cloud environments. It will help provision and manage compute, network, storage, and virtual machines seamlessly on a pool of ODA servers. It will also help deploy software, be it databases or applications, in the private cloud. Complemented by efficient command-line tools and UI, ODA private cloud based solutions will provide flexibility and high resource utilization to enterprises looking to deploy databases and applications in secure, easy-to-manage, and cost-effective private cloud environments. ODA will also be deployed in the Oracle Cloud Infrastructure (OCI) to offer a cloud@customer deployment model to customers. We are looking for self-motivated candidates who can develop robust software to manage core infrastructure pieces on ODA. An ideal candidate should have a solid systems software development background and a grasp of OS, database, and clustering concepts. Career Level - IC4 Responsibilities Roles and Responsibilities Design and develop solutions for upcoming releases of ODA. Conceptualize, design, and implement new features on the ODA platform for private clouds. Maintain existing code and work with test and support teams to fix defects. Provide technical guidance to other team members. Required Skills And Experience B.E./B.Tech in Computer Science or related fields. M.S. or PhD or equivalent experience is a plus. Three to ten years of work experience. 
Proficiency in Java. Java Concurrency: multi-threading, locking, synchronization-free concurrency implementation, Java concurrency patterns, etc. Java Tuning and Debugging: general Java tuning, multi-threaded performance considerations, sophisticated online debugging, heap dump analysis. Implementation and integration of RESTful APIs. Advanced Java Experience: hierarchical class loaders, runtime class loading, reflection APIs, use of generics in API design. Java + Database: core JDBC experience, ORM persistence frameworks, resource pooling and cleanup, datatype conversion. Secondary proficiency in Python is a plus. Knowledge of database management system internals is a plus. Strong computer science fundamentals: data structures and algorithms. Knowledge in the field of distributed systems, clustering, and high availability, and specific technologies such as ZooKeeper, is a big plus. Experience developing cloud solutions using OpenStack, Kubernetes, or other cloud technologies is a plus. Proficiency in Linux or another flavour of UNIX (Solaris, AIX, or HP-UX). OS Automation and Integration: scripting (shell, Python, PERL, etc.), Linux tools familiarity, OS resource management, job management. Self-motivated and able to deliver projects with minimal supervision. Good oral and written communication skills. Life at Oracle and Equal Opportunity An Oracle career can span industries, roles, countries and cultures, giving you the opportunity to flourish in new roles and innovate, while blending work life in. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. In order to nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation. 
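The "resource pooling and cleanup" skill above (JDBC connection pools in the posting's context) boils down to a bounded pool that hands resources out and guarantees their return. A minimal sketch in Python for brevity; the `Pool` class and its `make` factory are illustrative, not any real library's API.

```python
# Bounded resource-pool sketch: a fixed set of resources lives in a
# thread-safe queue, acquire() blocks when the pool is exhausted, and the
# context manager guarantees the resource is returned (the "cleanup" half).
from contextlib import contextmanager
from queue import Queue

class Pool:
    def __init__(self, make, size: int):
        self._q = Queue(maxsize=size)
        for _ in range(size):
            self._q.put(make())

    @contextmanager
    def acquire(self):
        res = self._q.get()      # blocks if every resource is checked out
        try:
            yield res
        finally:
            self._q.put(res)     # always returned, even on exception

pool = Pool(make=lambda: object(), size=2)  # object() stands in for a connection
with pool.acquire() as conn:
    in_use = pool._q.qsize()     # one resource checked out here
```

The `finally` clause is the important part: a pool that leaks resources on exceptions eventually deadlocks every caller, which is exactly the failure mode connection-pool libraries exist to prevent.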
Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application, interview process, and in potential roles to perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before. Disclaimer: Oracle is an Equal Employment Opportunity Employer*. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law. About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. 
We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
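The Java concurrency skills the Oracle role above calls for (multi-threading, locking, tuning of multi-threaded code) come down to protecting shared state from concurrent mutation. A minimal sketch of the locking idea, shown here in Python's `threading` module rather than Java purely for brevity:

```python
import threading

class SafeCounter:
    """A counter guarded by a lock, so concurrent increments are not lost."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # only one thread mutates the value at a time
            self._value += 1

    @property
    def value(self):
        return self._value

counter = SafeCounter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held on every increment, all 8 * 1000 updates survive.
```

The same pattern maps directly to `synchronized` blocks or `java.util.concurrent` locks in Java.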
Posted 2 months ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Senior Cloud Engineer We are seeking a highly skilled and driven Senior Cloud Engineer with 3+ years of experience in cloud infrastructure, automation, and software development. This role focuses on building and maintaining secure, scalable, and efficient cloud systems. The ideal candidate will have hands-on expertise in software development, infrastructure, automation, and container orchestration. As a Senior Cloud Engineer, you will design and implement solutions for complex, large-scale systems. You will collaborate across teams to deliver innovative, reliable cloud infrastructure while maintaining a strong focus on scalability, security, and cost efficiency. This role offers opportunities to lead technical initiatives and continuously enhance your expertise. Minimum Qualifications Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience. 3+ years of hands-on experience in software development and cloud engineering. Proficiency in one or more programming languages (Go, Python, Java, or Bash). Expertise in at least one major cloud platform (AWS, GCP, Azure, or Oracle Cloud). Strong understanding of infrastructure automation tools (Terraform, CloudFormation, Pulumi). Solid experience with container orchestration platforms (e.g., Kubernetes in EKS, GKE, or AKS). Networking expertise, including VPCs, subnets, routing, transit gateways, NAT gateways, and proxies. Experience with CI/CD pipelines using tools such as GitHub Actions, GitLab, Jenkins, or CircleCI. Familiarity with service discovery tools (e.g., Consul, Zookeeper) and secret management tools (e.g., HashiCorp Vault). Excellent communication and analytical skills to convey technical solutions clearly to diverse stakeholders. Responsibilities Design and Build Cloud Infrastructure: Develop and maintain secure, scalable cloud platforms, ensuring cloud governance and operational efficiency.
Infrastructure as Code (IaC): Manage and automate cloud infrastructure using tools such as Terraform and CloudFormation to ensure consistent, repeatable deployments. Kubernetes and Container Orchestration: Deploy, monitor, and manage containerized workloads in Kubernetes environments (e.g., EKS, GKE). Cloud Security and Governance: Implement cloud governance frameworks, monitor security configurations, and manage role-based access and compliance controls. Automation and CI/CD: Build and maintain CI/CD pipelines to automate software deployment, reduce manual effort, and increase system reliability. Networking and Connectivity: Configure and troubleshoot cloud networking components, such as VPCs, transit gateways, routing, and proxies. Monitoring and Optimization: Enhance platform performance, reliability, and cost efficiency by implementing robust monitoring and optimization strategies. Collaboration: Partner with cross-functional teams to align infrastructure solutions with business needs, translating technical concepts into actionable insights. Incident Response: Troubleshoot and resolve complex cloud-related issues, ensuring minimal downtime and efficient incident management. Continuous Improvement: Stay current with advancements in cloud technologies, containerization, and platform engineering to drive continuous innovation and improvements. Why Join Us? Work on cutting-edge cloud infrastructure projects that challenge and expand your technical expertise. Be part of a collaborative, inclusive culture that values knowledge-sharing, teamwork, and innovation. Grow your career through mentoring, learning programs, and opportunities to lead impactful initiatives. Contribute to building secure, reliable, and scalable cloud platforms used at scale by leveraging modern cloud engineering practices.
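The VPC and subnet expertise listed above usually starts with carving an address block into non-overlapping ranges. A small illustrative sketch using Python's stdlib `ipaddress` module (the 10.0.0.0/16 block and the /18 split are arbitrary example values, not any cloud's defaults):

```python
import ipaddress

# Carve a /16 VPC block into four equal /18 subnets
# (e.g. one public and one private subnet across two AZs).
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=18))

for net in subnets:
    # A couple of addresses per subnet are typically reserved
    # (network and broadcast), so usable hosts is slightly lower.
    print(net, "addresses:", net.num_addresses)
```

The same arithmetic is what Terraform's `cidrsubnet` function performs when templating VPC layouts.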
Posted 2 months ago
3.0 - 5.0 years
15 - 20 Lacs
Pune
Work from Office
About the job Sarvaha would like to welcome a Kafka Platform Engineer (or a seasoned backend engineer aspiring to move into platform architecture) with a minimum of 4 years of solid experience in building, deploying, and managing Kafka infrastructure on Kubernetes platforms. Sarvaha is a niche software development company that works with some of the best-funded startups and established companies across the globe. Please visit our website at What You'll Do - Deploy and manage scalable Kafka clusters on Kubernetes using Strimzi, Helm, Terraform, and StatefulSets - Tune Kafka for performance, reliability, and cost-efficiency - Implement Kafka security: TLS, SASL, ACLs, Kubernetes Secrets, and RBAC - Automate deployments across AWS, GCP, or Azure - Set up monitoring and alerting with Prometheus, Grafana, JMX Exporter - Integrate Kafka ecosystem components: Connect, Streams, Schema Registry - Define autoscaling, resource limits, and network policies for Kubernetes workloads - Maintain CI/CD pipelines (ArgoCD, Jenkins) and container workflows You Bring - BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering - Strong foundation in the Apache Kafka ecosystem and internals (brokers, ZooKeeper/KRaft, partitions, storage) - Proficient in Kafka setup, tuning, scaling, and topic/partition management - Skilled in managing Kafka on Kubernetes using Strimzi, Helm, Terraform - Experience with CI/CD, containerization, and GitOps workflows - Monitoring expertise using Prometheus, Grafana, JMX - Experience on EKS, GKE, or AKS preferred - Strong troubleshooting and incident response mindset - High sense of ownership and automation-first thinking - Excellent collaboration with SREs, developers, and platform teams - Clear communicator, documentation-driven, and eager to mentor/share knowledge.
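The topic/partition management skill above rests on one mechanism: a record's key is hashed to a fixed partition, which is what preserves per-key ordering. A toy sketch of that mapping (Kafka's default partitioner actually uses murmur2 on the serialized key; CRC32 stands in here purely for illustration):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Sticky mapping of a record key to a partition.

    Illustrative only: Kafka's DefaultPartitioner uses murmur2,
    but any stable hash gives the same property - equal keys
    always land on the same partition.
    """
    return zlib.crc32(key) % num_partitions

# Two records with the same key go to the same partition,
# so a consumer of that partition sees them in order.
p1 = partition_for(b"order-42", 12)
p2 = partition_for(b"order-42", 12)
```

This is also why increasing a topic's partition count remaps keys and can break ordering assumptions for existing consumers.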
Posted 2 months ago
0 years
0 Lacs
Agra, Uttar Pradesh, India
On-site
Major Accountabilities Collaborate with the CIO on application Architecture and Design of our ETL (Extract, Transform, Load) and other aspects of Data Pipelines. Our stack is built on top of the well-known Spark Ecosystem (e.g. Scala, Python, etc.) Periodically evaluate the architectural landscape for efficiencies in our Data Pipelines and define current state, target state architecture and transition plans, roadmaps to achieve the desired architectural state Conducts/leads and implements proofs of concept to prove new technologies in support of architecture vision and guiding principles (e.g. Flink) Assist in the ideation and execution of architectural principles, guidelines and technology standards that can be leveraged across the team and organization, especially around ETL & Data Pipelines Promotes consistency between all applications leveraging enterprise automation capabilities Provide architectural consultation, support, mentoring, and guidance to project teams, e.g. architects, data scientists, developers, etc. Collaborate with the DevOps Lead on technical features Define and manage work items using Agile methodologies (Kanban, Azure boards, etc.) Leads Data Engineering efforts (e.g.
Scala Spark, PySpark, etc.) Knowledge & Experience Experienced with Spark, Delta Lake, and Scala to work with Petabytes of data (to work with Batch and Streaming flows) Knowledge of a wide variety of open source technologies including but not limited to: NiFi, Kubernetes, Docker, Hive, Oozie, YARN, Zookeeper, PostgreSQL, RabbitMQ, Elasticsearch A strong understanding of AWS/Azure and/or technology as a service (IaaS, SaaS, PaaS) Strong verbal and written communication skills are a must, as well as the ability to work effectively across internal and external organizations and virtual teams Appreciation of building high volume, low latency systems for the API flow Core Dev skills (SOLID principles, IoC, 12-factor app, CI-CD, Git) Messaging, Microservice Architecture, Caching (Redis), Containerization, Performance, and Load testing, REST APIs Knowledge of HTML, JavaScript frameworks (preferably Angular 2+), TypeScript Appreciation of Python and C# .NET Core or Java Appreciation of global data privacy requirements and cryptography Experience in System Testing and experience of automated testing e.g. unit tests, integration tests, mocking/stubbing Relevant Industry And Other Professional Qualifications Tertiary qualifications (degree level) We are an inclusive employer and welcome applicants from all backgrounds. We pride ourselves on our commitment to Equality and Diversity and are committed to removing barriers throughout our hiring process.
Key Requirements Extensive data engineering development experience (e.g., ETL), using well-known stacks (e.g., Scala Spark) Experience in Technical Leadership positions (or looking to gain experience) Background in software engineering The ability to write technical documentation Solid understanding of virtualization and/or cloud computing technologies (e.g., Docker, Kubernetes) Experience in designing software solutions and enjoys UML and the odd sequence diagram Experience operating within an Agile environment Ability to work independently and with minimum supervision Strong project development management skills, with the ability to successfully manage and prioritize numerous time-pressured analytical projects/work tasks simultaneously Able to pivot quickly and make rapid decisions based on changing needs in a fast-paced environment Works constructively with teams and acts with high integrity Passionate team player with an inquisitive, creative mindset and ability to think outside the box. Skills: Java, Scala, Apache Spark, Spark, Hadoop and ETL
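The "Core Dev skills (SOLID principles, IoC, ...)" requirement above is easiest to see in code: inversion of control means a class receives its collaborators instead of constructing them itself. A minimal Python sketch (the class and method names are invented for illustration):

```python
class SmtpMailer:
    """Real collaborator: would talk to an SMTP server in production."""
    def send(self, to, body):
        return f"smtp:{to}:{body}"

class FakeMailer:
    """Test double with the same interface as SmtpMailer."""
    def send(self, to, body):
        return f"fake:{to}:{body}"

class SignupService:
    # The mailer is injected, not constructed inside the class,
    # so tests can substitute a fake without touching this code.
    def __init__(self, mailer):
        self._mailer = mailer

    def register(self, email):
        return self._mailer.send(email, "welcome")

# Wiring happens at the composition root, not inside the service.
service = SignupService(FakeMailer())
result = service.register("a@b.c")
```

Frameworks like Spring automate exactly this wiring; the design benefit (swappable, testable collaborators) is identical.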
Posted 2 months ago
0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Senior Software Engineer Overview The MDES team is looking for a Senior Software Development Engineer who can develop microservices-based Enterprise applications using the Java J2EE stack, as well as portals that would be used by customer care, end users, customer representatives, etc. The ideal candidate is one who is passionate about designing & developing high-quality code which is highly scalable, operable & highly available Role Develop (code) Enterprise Applications with quality, within schedule and within estimated efforts. Assist the Lead Engineer in low-level design Provide estimates for assigned tasks Write and execute Unit and Integration test cases Provide accurate status of tasks Perform peer reviews and mentor junior team members Comply with the organization's processes and policies and protect the organization's Intellectual Property.
Also, participate in organization-level process improvement and knowledge sharing All About You Essential knowledge, skills & attributes Hands-on experience with core Java, Spring Boot, Spring (MVC, IOC, AOP, Security), SQL, RDBMS (Oracle and Postgres), Web services (JSON and SOAP), Kafka, Zookeeper Hands-on experience developing microservice applications & deploying them on any one of the public clouds like Google, AWS, Azure Hands-on experience using the IntelliJ/Eclipse/MyEclipse IDE Hands-on experience writing JUnit test cases, working with Maven/Ant/Gradle, Git Knowledge of Design Patterns Experience of working with Agile methodologies. Personal attributes: strong logical and analytical skills, design skills, and the ability to articulate and present thoughts very clearly and precisely in English (written and verbal) Knowledge of security concepts (e.g. authentication, authorization, confidentiality, etc.) and protocols, and their usage in enterprise applications Additional/Desirable Capabilities Experience of working in the Payments application domain Hands-on experience working with tools like Mockito, JBehave, Jenkins, Bamboo, Confluence, Rally Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-244633
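The security concepts listed in the essentials above (authentication, integrity) are commonly implemented for service-to-service messages with an HMAC over the payload. A hedged sketch using Python's stdlib `hmac` module (the secret and payload are placeholder values; a real deployment would load the key from a secrets manager):

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # placeholder; never hard-code real keys

def sign(payload: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the payload."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest runs in constant time, avoiding timing side channels
    return hmac.compare_digest(sign(payload), signature)

msg = b'{"txn": 101}'
tag = sign(msg)
ok = verify(msg, tag)                 # untampered payload verifies
bad = verify(b'{"txn": 102}', tag)    # modified payload fails
```

The same construction underlies signed webhooks and JWT HS256 tokens, which the role's Spring Security work would encounter.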
Posted 2 months ago
2.0 years
0 Lacs
India
On-site
Lucidworks is leading digital transformation for some of the world's biggest retailers, financial services firms, manufacturers, and B2B commerce organizations. We believe that the core to a great digital experience starts with search and browse. Our Deep Learning technology captures user behavior and utilizes machine learning to connect people with the products, content, and information they need. Brands including American Airlines, Lenovo, Red Hat, and Cisco Systems rely on Lucidworks' suite of products to power commerce, customer service, and workplace applications that delight customers and empower employees. Lucidworks believes in the power of diversity and inclusion to help us do our best work. We are an Equal Opportunity employer and welcome talent across a full range of backgrounds, orientation, origin, and identity in an inclusive and non-discriminatory way. About The Team The technical support team leverages their extensive experience supporting large-scale Solr clusters and the Lucene/Solr ecosystem. Their day might include troubleshooting errors and attempting to fix or develop workarounds, diagnosing network and environmental issues, learning your customer's infrastructure and technologies, as well as reproducing bugs and opening Jira tickets for the engineering team. Their primary tasks are break/fix scenarios where the diagnostics quickly bring network assets back online and prevent future problems--which has a huge impact on our customers’ business. About The Role As a Search Engineer in Technical Support, you will play a critical role in helping our clients achieve success with our products. You will be responsible for assisting clients directly in resolving any technical issues they encounter, as well as answering questions about the product and feature functionality. 
You will work closely with internal teams such as Engineering and Customer Success to resolve a variety of issues, including product defects, performance issues, and feature requests. This role requires excellent problem-solving skills and attention to detail, strong communication abilities, and a deep understanding of search technology. Additionally, this role requires the ability to work independently and as part of a team, and comfort working with both technical and non-technical stakeholders. The successful candidate will demonstrate a passion for delivering an outstanding customer experience, balancing technical expertise with empathy for the customer’s needs. This role is open to candidates in India. The role is expected to participate in weekend on-call rotations. Responsibilities Field incoming questions, help users configure Lucidworks Fusion and its components, and help them to understand how to use the features of the product Troubleshoot complex search issues in and around Lucene/Solr Document solutions into knowledge base articles for use by our customer base in our knowledge center Identify opportunities to provide customers with additional value through follow-on products and/or services Communicate high-value use cases and customer feedback to our Product Development and Engineering teams Collaborate across teams internally to diagnose and resolve critical issues Participate in a 24/7/365 on-call rotation, which includes weekend and holiday shifts Skills & Qualifications 2+ years of hands-on experience with Lucene/Solr or other search technologies like Elastic BS or higher in Engineering or Computer Science is preferred 3+ years professional experience in a customer-facing level 2-3 tech support role Experience with technical support CRM systems (Salesforce, Zendesk etc.) Ability to clearly communicate with customers by email and phone Proficiency with Java and one or more common scripting languages (Python, Perl, Ruby, etc.)
Proficiency with Unix/Linux systems (command line navigation, file system permissions, system logs and administration, scripting, networking, etc.) Exposure to other related open source projects (Mahout, Hadoop, Tika, etc.) and commercial search technologies Enterprise Search, eCommerce, and/or Business Intelligence experience Knowledge of data science and machine learning concepts Experience with cloud computing platforms (GCP, Azure, AWS, etc.) and Kubernetes Startup experience is preferred Our Stack Apache Lucene/Solr, ZooKeeper, Spark, Pulsar, Kafka, Grafana Java, Python, Linux, Kubernetes Zendesk, Jira ₹16,21,000 - ₹22,28,500 a year This salary range may include multiple levels. Your level is based on our assessment of your interview performance and experience, which you can always ask the hiring manager about to understand in more detail. Salary is just one component of Lucidworks’ total compensation package for employees. Your total rewards package may include (but is not necessarily limited to) discretionary variable bonus, top-notch medical, dental and vision coverage, a variety of voluntary benefits, generous PTO policy, various leave policies, and many other region-specific benefits. Lucidworks believes in the power of diversity and inclusion to help us do our best work. We are an Equal Opportunity employer and welcome talent across a full range of backgrounds, orientation, origin, and identity in an inclusive and non-discriminatory way. Applicants receive consideration based on the relevant talents, skills, and experiences they offer to our company. Thank you for your interest and we look forward to learning more about you. Note to third party recruiters: We appreciate your interest in our job opportunities. However, we kindly request that third-party recruiters and staffing agencies refrain from contacting us regarding these positions. 
We prefer to work directly with candidates and do not accept unsolicited resumes or candidate referrals from third-party recruiters or agencies. Unsolicited resumes and referrals will become the property of Lucidworks, and no fee will be paid should we hire a candidate whose resume was sent unsolicited.
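Troubleshooting the Lucene/Solr issues this support role describes usually starts with reproducing a customer's query by hand against Solr's /select endpoint. A small sketch that builds such a request URL with Python's stdlib (the base URL and collection name are example values; `q`, `rows`, and `wt` are standard Solr query parameters):

```python
from urllib.parse import urlencode

def solr_select_url(base: str, collection: str, query: str, rows: int = 10) -> str:
    """Build a Solr /select URL for manual reproduction of a search."""
    params = urlencode({"q": query, "rows": rows, "wt": "json"})
    return f"{base}/solr/{collection}/select?{params}"

# Reproduce a field-scoped query against a hypothetical 'products' collection.
url = solr_select_url("http://localhost:8983", "products", "title:laptop")
```

Pasting the resulting URL into curl or a browser isolates whether a problem lies in the query itself or in the client application sending it.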
Posted 2 months ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Profile Description We’re seeking someone to join our team as an (Associate) Production Support Engineer, who is expected to troubleshoot platform issues/outages, write code (scripting), and participate in a variety of architecture and design discussions. The ideal candidate is a self-motivated and collaborative team player committed to delivering solutions and has the ability to work with minimal supervision. WM_Technology Wealth Management Technology is responsible for the design, development, delivery, and support of the technical solutions behind the products and services used by the Morgan Stanley Wealth Management Business. Practice areas include: Analytics, Intelligence, & Data Technology (AIDT), Client Platforms, Core Technology Services (CTS), Financial Advisor Platforms, Global Banking Technology (GBT), Investment Solutions Technology (IST), Institutional Wealth and Corporate Solutions Technology (IWCST), Technology Delivery Management (TDM), User Experience (UX), and the CAO team. Core Platform Services Core Platform Services is responsible for driving Resiliency, Automation, Performance, Stability, and Efficiency across Wealth Management Technology. Software Production Management & Reliability Engineering This is an Associate position that oversees the production environment, ensures the operational reliability of deployed software, and implements strategies to optimize performance and minimize downtime. Morgan Stanley is an industry leader in financial services, known for mobilizing capital to help governments, corporations, institutions, and individuals around the world achieve their financial goals. At Morgan Stanley India, we support the Firm’s global businesses, with critical presence across Institutional Securities, Wealth Management, and Investment Management, as well as in the Firm’s infrastructure functions of Technology, Operations, Finance, Risk Management, Legal and Corporate & Enterprise Services.
Morgan Stanley has been rooted in India since 1993, with campuses in both Mumbai and Bengaluru. We empower our multi-faceted and talented teams to advance their careers and make a global impact on the business. For those who show passion and grit in their work, there’s ample opportunity to move across the businesses. Interested in joining a team that’s eager to create, innovate and make an impact on the world? Read on… What You’ll Do In The Role Operational Performance & Stability: Works with various teams to ensure that the in-scope applications/platforms are meeting performance and stability requirements. Manages Major Incidents to Mitigation/Resolution. Incident and Problem Management: Performs Post-Incident Reviews of all Major Incidents and determines the Action Items required to avoid similar issues/minimize downtime for future Incidents. Monitors and Metrics: Works with Application Development to ensure that assigned applications/platforms have the appropriate monitoring and metrics in place to appropriately measure performance and stability. Identify Functional and Non-Functional Improvements: Acts as the Operations representative in Value Stream planning and prioritization sessions to ensure that the Operational needs of assigned applications/platforms are addressed as needed. Holds quarterly Operational Performance Reviews with Value Stream management. Release Planning & Coordination: Works with the SCM and Development teams to ensure that the Production releases for their in-scope applications/platforms are properly planned and coordinated. This includes holding Change/Release implementation reviews to ensure thorough and appropriate implementation plans. Provides review and sign-off/approval of change tickets for the assigned Value Stream. Participates in Program Increment Planning Sessions as a liaison for Operations and Infrastructure support.
Provides information regarding upcoming critical changes to the Value Stream. Operational Readiness: Ensures that applications/platforms are Operationally ready for Production. This includes an Annual Review of all SOPs/Knowledge Articles. Monitoring review for any new Feature launch or other significant change that may impact monitoring. SOP/Knowledge Article review for any new Feature launch or other significant change that may impact support documentation. Training of Command Center and Application 1st-level Support on new SOPs, Knowledge Articles, and any other support-related needs. Performs Monthly Capacity Analysis of applications/platforms within the Value Stream. Creates and Maintains Operationally focused ELK Dashboards for the Value Stream. Prudence and diligence around troubleshooting and resolving issues while mitigating recurrence with short- and long-term solutions and fixes. The candidate is expected to be responsible for managing their own projects, project execution and day-to-day duties within the team. Organized, with the ability to prioritize tasks accurately and efficiently. What You’ll Bring To The Role At least 4 years of Application Support experience At least 4 years of Linux Administrator experience, i.e. strong knowledge of Linux/Unix. Hands-on development experience with Python or shell scripting. Hands-on experience with Kubernetes and CaaS platform orchestration systems such as Kubernetes-OpenShift, including troubleshooting, scaling and other operations tasks. Good experience in Kafka and Zookeeper Experience with Azure Cloud or other Cloud technology. Good to have: experience working with / troubleshooting any API Gateway (preferably APIGEE) Experience with configuring and managing high-availability systems. Experience with networking, security & application load balancing. Strong problem management, troubleshooting and analytical skills. Experience in troubleshooting within enterprise environments.
Diagnosing and addressing performance issues using performance monitors, custom scripts and various tuning techniques. Ability to work well as a team and as an individual with minimal supervision. Strong communication and interpersonal skills Desired Skills Experience in Wealth Management or a similar financial environment Experience in working on Cloud native architecture Collaborate with internal teams to produce software design and architecture Improve existing software Develop documentation throughout the software development life cycle (SDLC) Serve as an expert on applications and provide technical support Good to have : Experience on APIGEE, Solr What You Can Expect From Morgan Stanley We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 85 years. At our foundation are five core values — putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back — that guide our more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you’ll find trusted colleagues, committed mentors and a culture that values diverse perspectives, individual intellect and cross-collaboration. Our Firm is differentiated by the caliber of our diverse team, while our company culture and commitment to inclusion define our legacy and shape our future, helping to strengthen our business and bring value to clients around the world. Learn more about how we put this commitment to action: morganstanley.com/diversity. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximise their full potential. 
Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing and advancing individuals based on their skills and talents.
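Much of the Python/shell scripting this support role describes amounts to quickly slicing logs during an incident. An illustrative stdlib-only sketch (the log format and service names are invented for the example):

```python
import collections

# Tally ERROR lines per service from application logs - the kind of
# quick triage script an on-call engineer writes during an incident.
LOG_LINES = [
    "2024-05-01T10:00:01 kafka-broker ERROR disk quota exceeded",
    "2024-05-01T10:00:02 api-gateway INFO request ok",
    "2024-05-01T10:00:03 kafka-broker ERROR disk quota exceeded",
    "2024-05-01T10:00:04 zookeeper WARN session timeout",
]

errors = collections.Counter(
    line.split()[1]              # second whitespace field is the service name
    for line in LOG_LINES
    if " ERROR " in line
)
```

In practice the same aggregation would run over a file handle or an ELK query result rather than an in-memory list.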
Posted 2 months ago
3.0 - 8.0 years
15 - 20 Lacs
Pune
Work from Office
About the job Sarvaha would like to welcome a Kafka Platform Engineer (or a seasoned backend engineer aspiring to move into platform architecture) with a minimum of 4 years of solid experience in building, deploying, and managing Kafka infrastructure on Kubernetes platforms. Sarvaha is a niche software development company that works with some of the best-funded startups and established companies across the globe. Please visit our website at What You'll Do - Deploy and manage scalable Kafka clusters on Kubernetes using Strimzi, Helm, Terraform, and StatefulSets - Tune Kafka for performance, reliability, and cost-efficiency - Implement Kafka security: TLS, SASL, ACLs, Kubernetes Secrets, and RBAC - Automate deployments across AWS, GCP, or Azure - Set up monitoring and alerting with Prometheus, Grafana, JMX Exporter - Integrate Kafka ecosystem components: Connect, Streams, Schema Registry - Define autoscaling, resource limits, and network policies for Kubernetes workloads - Maintain CI/CD pipelines (ArgoCD, Jenkins) and container workflows You Bring - BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering - Strong foundation in the Apache Kafka ecosystem and internals (brokers, ZooKeeper/KRaft, partitions, storage) - Proficient in Kafka setup, tuning, scaling, and topic/partition management - Skilled in managing Kafka on Kubernetes using Strimzi, Helm, Terraform - Experience with CI/CD, containerization, and GitOps workflows - Monitoring expertise using Prometheus, Grafana, JMX - Experience on EKS, GKE, or AKS preferred - Strong troubleshooting and incident response mindset - High sense of ownership and automation-first thinking - Excellent collaboration with SREs, developers, and platform teams - Clear communicator, documentation-driven, and eager to mentor/share knowledge. Why Join Sarvaha?
- Top-notch remuneration and excellent growth opportunities - An excellent, no-nonsense work environment with the very best people to work with - Highly challenging software implementation problems - Hybrid mode. We have offered complete work from home since before the pandemic.
Posted 2 months ago
0 years
0 Lacs
India
On-site
Senior Java Developer

PROFESSIONAL SUMMARY:
• 8+ years of software experience in design, development, and deployment of web-based client-server business applications using OOP and Java/J2EE technologies.
• Experience in the Agile software development process, Test-Driven Development, and Scrum methodologies.
• Proficient in applying design patterns such as MVC, Singleton, Session Facade, Service Locator, Decorator, Front Controller, and Data Access Object.
• Experienced in developing web applications implementing Model-View-Controller (MVC) architecture using JSP, Servlets, J2EE design patterns, Struts, and the Spring Framework (Spring MVC/IOC/ORM/AOP/Security/Boot).
• Created dashboards and reports using Backbone.js.
• Extensively used the following frameworks: Spring MVC, Struts, JSF, Spring, and Hibernate.
• IOC and Dependency Injection in various aspects of the Spring Framework (Core, Web, JDBC, MVC, and DAO).
• Deployed production-ready Java/J2EE applications using Elastic Beanstalk, which auto-configures capacity provisioning through autoscaling, load balancing, and application health monitoring; proficient in using Amazon Web Services (AWS).
• Performed various ETL transformations in source-to-target data-mapping workshops.
• Extensive experience focusing on services like EC2, VPC, CloudWatch, CloudFront, Cl

TECHNICAL SKILLS:
Java Technologies: JDBC, Servlets, JSP, JSTL, Struts, Spring 2.5/4.0, Hibernate, Web Services (SOAP, REST), JSF, JMS, JAXB, Applets, AWT
Frameworks: Apache Struts 1.3/2.0, Spring 2.5/4.0, Spring MVC, Hibernate, jQuery 1.6/1.8, JSF, JUnit testing, Log4j, Spring Boot, Spring Security, AOP, ANT, Maven, IBM MQ Series 5.3
Application Servers: WebLogic 8.1/10.3, Tomcat, JBoss, WebSphere 6/7
IDEs & Tools: Eclipse 3.3+, IntelliJ, NetBeans 5.5+, RAD 7.0, Rally, Quality Center 8.0, Visio, AQT, SQL Developer, TOAD, SOAP UI, Rational Rose, JBuilder, Console, Jenkins, Sonar, Gradle
Reporting Tools: SQL Server Reporting Services
Databases: Oracle 10g/11g, MySQL 5.1, MS SQL Server 2008/12/16, DB2
Version Control: Git, SVN, CM Synergy, Rational ClearCase, CVS, VSS
Software Process/Methodologies: Agile, Waterfall, Test-Driven Development
Operating Systems: Unix, Linux, Windows, MS-DOS
Architectures: J2EE, Layered, Service-Oriented Architecture (SOA), MVC1, MVC2
Programming Languages: Java, Java 8, J2EE, Scala 2.12.1, SQL, PL/SQL, JavaScript
DevOps/Cloud Tools: Jenkins, Git, Docker, Kubernetes, AWS

PROFESSIONAL EXPERIENCE:
Cigna - Bloomfield, CT (Nov 2023 – Till Date)
Full Stack Developer
• Developed RESTful web services and microservices with Java, Spring Boot, Groovy, and Groovy on Grails.
• Implemented Java 8+ features such as lambda expressions, filters, and parallel operations on collections for effective sorting mechanisms.
• Built interactions of multiple services through REST and Apache Kafka message brokers.
• Developed POCs and solutions for various system components using Microsoft Azure.
• Created an Azure Logic App to integrate services in the organization.
• Utilized Grafana, Swagger, and Splunk to inspect and analyze the performance of services.
• Implemented unit tests using Spock, feature tests using Selenium, and performance tests using Gatling to achieve service accuracy.
• Implemented Java EE components using Spring MVC, Spring IOC, Spring Transactions, and Spring Security modules.
• Integrated .NET APIs into the Spring Boot/Groovy services for seamless data exchange and interoperability between Java and .NET components.
• Translated functional requirements into technical design specifications.
• Implemented POJOs and DAOs, and used Spring Data JPA and Hibernate to create an object-relational mapping of the database using annotations and reduce boilerplate code.
• Utilized Camunda to implement business-decision and workflow automation.
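The Java 8 stream work cited above (lambda expressions, filters, parallel operations for sorting) maps naturally onto Python comprehensions and `sorted`. A small illustrative sketch, with made-up order data purely for demonstration:

```python
# Sample records; the fields are illustrative, not from any real system.
orders = [
    {"id": 1, "amount": 250.0, "status": "PAID"},
    {"id": 2, "amount": 75.5,  "status": "PENDING"},
    {"id": 3, "amount": 990.0, "status": "PAID"},
]

# Same shape as a Java 8 pipeline:
#   orders.stream().filter(o -> o.status.equals("PAID"))
#         .map(Order::amount).sorted(reverseOrder()).collect(toList())
paid_amounts = sorted(
    (o["amount"] for o in orders if o["status"] == "PAID"),
    reverse=True,
)
```

The generator expression plays the role of `filter`/`map`, and `sorted` terminates the pipeline the way `collect` does in Java.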
• Maintained the CI/CD process with GitHub, Jira, Docker, OpenShift, and Jenkins to speed up the path from code base to production, and used Maven to build the application.

Environment: Java 8+, Spring Boot, Groovy, Kafka, RESTful web services, .NET, Microservices, Grafana, Swagger, Jenkins, Git, Azure, Jira, Spock, Selenium, pair programming, Gatling, Maven, MySQL, JSON, IntelliJ, DB-Visualizer.

Client: Kyndryl – IBM
Role: Senior Full Stack / Java Developer (Jun 2023 – Oct 2023)
Responsibilities:
• Collaborated with cross-functional teams to identify and remediate vulnerabilities by analyzing Mend scan results.
• Utilized the Maven repository to source non-vulnerable library versions, enhancing code security.
• Successfully deployed code to VT (Validation and Testing) and QA (Quality Assurance) environments, ensuring rigorous validation.
• Managed and tracked project changes efficiently using Jira Software and Agile methodologies.
• Leveraged tools like Eclipse, Postman, and Visual Studio Code for development tasks, ensuring code quality and efficiency.
• Proficiently used Jira, WhiteSource, Git, Jenkins, and JFrog for project management, version control, and streamlined development and deployment processes.

Client: Ulta Beauty - Bolingbrook, IL (Aug 2022 – Mar 2023)
Role: Java Developer
Responsibilities:
• Deployed Spring Boot-based microservice Docker containers using AWS EC2 Container Service and the AWS admin console.
• Designed the website user interface, interaction scenarios, and navigation based on analyst interpretations of requirements and use cases.
• Worked one-on-one with the client to develop the layout and color scheme for his website and implemented it into a final interface design with HTML5/CSS3 and JavaScript.
• Migrated the server to a cloud environment using AWS services.
• Experienced in interfacing and e-learning layouts for web/desktop/mobile using HTML.
• Worked on developing the server-side code of the application using Node.js and Express.js.
• Involved in writing application-level code to interact with APIs and web services using AJAX, JSON, and XML.
• Experience working with AWS EC2, S3, and the CloudWatch platform; created multiple VPCs and subnets in AWS as per requirements.
• Strong expertise in producing APIs using RESTful web services for web-based applications, consuming RESTful web services using AJAX and jQuery, and rendering JSON responses on the UI.
• Designed and developed user-interface web forms using Flash, CSS, and JavaScript.
• Created dynamic integration of the jQuery tab and other jQuery components with Ajax.
• Worked with TypeScript decorators, interfaces, type restrictions, and ES6 features.

Client: AT&T, Atlanta, GA (Mar 2019 – Jul 2022)
Role: Full Stack Developer
Responsibilities:
• The application is designed using J2EE design patterns and technologies based on SOA architecture.
• Used Java 8 features including parallel streams, lambdas, functional interfaces, and filters.
• Worked on REST APIs and Elasticsearch to efficiently handle and search JSON data.
• Worked with the Docker container service, using build, port, and other utilities to deploy web applications.
• Interacted with users, customers, and business users for requirements and training on new features.
• Developed various helper classes as needed, following core Java multithreaded programming and collection classes.
• Developed a web-based UI using jQuery, Bootstrap, JavaScript, and AJAX for client-side validations.
• Implemented the circuit-breaker pattern and integrated the Hystrix dashboard to monitor Spring microservices.
• Secured the REST APIs by implementing an OAuth2 token-based authorization scheme using Spring Security.
• Installed, secured, and configured AWS cloud servers and Amazon AWS virtual servers (Linux).
• Deployed Spring Boot-based microservice Docker containers using AWS EC2 Container Service and the AWS admin console.
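The circuit-breaker pattern mentioned in this résumé (Hystrix-style) is compact enough to sketch. A minimal, greatly simplified illustration in Python: the breaker opens after a run of consecutive failures, rejects calls for a cool-down period, then allows a trial call. Real implementations add half-open probing, rolling failure windows, and thread safety:

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch (Hystrix-style, greatly simplified).

    Opens after `max_failures` consecutive errors and rejects calls
    until `reset_after` seconds have elapsed, then allows a retry.
    """
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open, call rejected")
            self.opened_at = None  # cool-down over: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the failure count
        return result

def failing_backend():
    raise ValueError("backend down")

# Demo: two failures trip the breaker; the next call is rejected
# without ever reaching the backend.
cb = CircuitBreaker(max_failures=2, reset_after=60.0)
for _ in range(2):
    try:
        cb.call(failing_backend)
    except ValueError:
        pass  # breaker counts the failure and re-raises it

try:
    cb.call(lambda: "ok")
    rejected = False
except RuntimeError:
    rejected = True  # breaker is open, so the healthy call is rejected
```

The key design point is that rejection is cheap: once open, the breaker fails fast instead of tying up threads on a struggling downstream service.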
• Worked on spinning up AWS EC2 instances, creating IAM users and roles, creating Auto Scaling groups and load balancers, and monitoring the applications, S3 buckets, VPCs, etc. through CloudWatch.
• Created and configured continuous-delivery pipelines for deploying microservices and Lambda functions using a CI/CD Jenkins server.
• Used Apache Kafka for reliable and asynchronous exchange of information between business applications.
• Worked on the Swagger API and auto-generated documentation for all REST calls.
• Implemented and exposed a microservice architecture with Spring Boot-based services interacting through a combination of REST and Apache Kafka with ZooKeeper message brokers.
• Extensively used Hibernate 4.2 concepts such as inheritance, lazy loading, dirty checking, and transactions.
• Used Jenkins and pipelines to drive all microservice builds out to the Docker registry and then deployed to Kubernetes; created pods and managed them using Kubernetes.

Environment: Java, Java 8, HTML5, CSS3, Bootstrap, Python, ReactJS, Node.js, JavaScript, Ajax, Maven, Spring 4.x, Hibernate 4.x, Docker, AWS S3, VPC, REST, WebLogic Server, Swagger API, Kafka, Kubernetes, Jenkins, Git, JUnit, Mockito, Oracle, MongoDB, Agile Scrum.

Client: CenturyLink, Denver, CO (Jun 2018 – Feb 2019)
Role: Full Stack Developer
Responsibilities:
• Designed and developed business components using Spring AOP, Spring IOC, and Spring Batch.
• Involved in requirements gathering and analysis from the existing system.
• Worked with Agile software development.
• Implemented the DAO layer using Hibernate and AOP, and the service layer using Spring MVC design.
• Developed Java server components using Spring, Spring MVC, Hibernate, and web services technologies.
• Used Java 1.7 with generics, for loops, static imports, annotations, etc., plus J2EE, Servlets, JSP, JDBC, Spring 3.1 RC1, Hibernate, web services (Axis, JAX-WS, JAXP, JAXB), and JavaScript frameworks (DOJO, jQuery, AJAX, XML, Schema).
• Used Hibernate as the persistence framework for the DAO layer to access the database.
• Used GitHub and Jenkins for building the CI/CD pipeline and day-to-day builds and deployments using Gradle.
• Designed and developed RESTful APIs for different modules in the project as per the requirements.
• Developed JSP pages using custom tags and the Tiles framework.
• Developed a RESTful service interface using Spring Boot to the underlying Agent Services API.
• Developed the persistence layer (DAL) and the presentation layer.
• Used Maven as the build framework and Jenkins as the continuous-build system.
• Extensive experience developing Representational State Transfer (REST) based services and Simple Object Access Protocol (SOAP) based services.
• Developed the GUI using front-end technologies: JSP, JSTL, AJAX, HTML, CSS, and JavaScript.
• Developed code for web services using XML and SOAP, and used the SOAP UI tool for testing the services; proficient in testing web-page functionality and raising defects.
• Configured and deployed the application using Tomcat and WebLogic.
• Used Log4j to print info, warning, and error data to the logs.
• Prepared auto-deployment scripts for WebLogic in a UNIX environment.
• Used Java messaging artifacts (JMS) to send automated notification emails to the respective users of the application.

Environment: Java, J2EE, Spring Core, Spring Data, Spring MVC, Spring AOP, Jenkins, Spring Batch, Spring Scheduler, RESTful web services, SOAP web services, Hibernate, Eclipse IDE, AngularJS, JSP, JSTL, HTML5, CSS, JavaScript, WebLogic, Tomcat, XML, XSD, Unix, Linux, UML, Oracle, Maven, SVN, SOA, design patterns, JMS, JUnit, Log4j, WSDL, JSON, JNDI.

Client: Verizon, Irving, TX (Apr 2017 – May 2018)
Role: Java Developer
Responsibilities:
• Identified the business requirements of the project and was involved in preparing system requirements for the project.
• Used XML/XSLT for transforming to a common XML format and SAML for single sign-on.
• Designed the configuration XML schema for the application.
• Used JavaScript for client-side validation.
• Extensively used Git for version control and regularly pushed the code to GitHub.
• Used the Spring Boot framework to create properties for various environments and for configuration.
• Developed both RESTful and SOAP web services depending on the design needs of the project.
• Used the XMLHttpRequest object to provide asynchronous communication as part of the AJAX implementation.
• Used Redux to create a store containing all the states of the application, fetched data from the back end, and used the redux-promise middleware efficiently. Used SAML to implement authentication and authorization scenarios.
• Implemented and exposed microservices based on RESTful APIs utilizing Spring Boot.
• Used the REST controller in the Spring framework to create RESTful web services and JSON objects for communication.
• Extensively used the MVC, Factory, Delegate, and Singleton design patterns.
• Developed a server-side application to interact with the database using Spring Boot and Hibernate.
• Deployed Spring Boot-based microservice Docker containers using Amazon EC2 Container Service and the AWS admin console.
• Used the Spring Framework AOP module to implement logging in the application to know the application status.

Environment: Core Java/J2EE, ReactJS, microservices, CSS, JDBC, Ajax, Spring AOP module, Ant scripts, JavaScript, Eclipse, UML, RESTful, Rational Rose, Tomcat, Git, JUnit, Ant.

Client: Capital Group, San Antonio, TX (Jan 2016 – Mar 2017)
Role: Java Developer
Responsibilities:
• Involved in requirements gathering and analysis from the existing system.
• Worked with Agile software development.
• Designed and developed business components using Spring AOP, Spring IOC, and Spring Batch.
• Implemented the DAO layer using Hibernate and AOP, and the service layer using Spring MVC design.
• Developed Java server components using Spring, Spring MVC, Hibernate, and web services technologies.
• Used Java 1.7 with generics, for loops, static imports, annotations, etc., plus J2EE, Servlets, JSP, JDBC, Spring 3.1 RC1, Hibernate, web services (Axis, JAX-WS, JAXP, JAXB), and JavaScript frameworks (DOJO, jQuery, AJAX, XML, Schema).
• Designed and developed RESTful APIs for different modules in the project as per the requirements.
• Developed JSP pages using custom tags and the Tiles framework.
• Developed the user-interface screens for presentation logic using JSP and HTML.
• Developed SQL queries to interact with the SQL Server database and was involved in writing PL/SQL code for procedures and functions.
• Used Maven as the build framework and Jenkins as the continuous-build system.
• Developed the GUI using front-end technologies: JSP, JSTL, AJAX, HTML, CSS, and JavaScript.
• Developed code for web services using XML and SOAP, and used the SOAP UI tool for testing the services; proficient in testing web-page functionality and raising defects.
• Configured and deployed the application using Tomcat and WebLogic.
• Used design patterns such as Business Object (BO), Service Locator, Session Facade, Model-View-Controller, DAO, and DTO.
• Used Log4j to print info, warning, and error data to the logs.
• Prepared auto-deployment scripts for WebLogic in a UNIX environment.
• Used Java messaging artifacts (JMS) to send automated notification emails to the respective users of the application.

Environment: Java 1.6, Spring-Hibernate integration framework, AngularJS, JSP, Spring, HTML, Oracle 10g, SQL, PL/SQL, XML, WebLogic, Eclipse, Ajax, jQuery.
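The Redux store described in the Verizon engagement above (a single store holding all application state, updated by dispatched actions) follows a pattern small enough to sketch outside JavaScript. A minimal Python illustration; the `counter` reducer and action names are invented for the example:

```python
class Store:
    """Minimal Redux-style store: one state tree, a pure reducer,
    and dispatch() as the only way to change state."""
    def __init__(self, reducer, initial_state):
        self._reducer = reducer
        self.state = initial_state

    def dispatch(self, action):
        # The reducer computes the next state; the old state is
        # never mutated in place, mirroring Redux's immutability rule.
        self.state = self._reducer(self.state, action)
        return action

def counter(state, action):
    """Example reducer (hypothetical): handles one action type and
    returns the state unchanged for anything it does not recognize."""
    if action["type"] == "INCREMENT":
        return {**state, "count": state["count"] + 1}
    return state

store = Store(counter, {"count": 0})
store.dispatch({"type": "INCREMENT"})
store.dispatch({"type": "INCREMENT"})
```

Because the reducer is a pure function, every state transition is reproducible from the initial state plus the action log, which is what makes Redux-style stores easy to test and debug.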
EDUCATION
Osmania University - Bachelor of Science

CERTIFICATION:
AWS Certified Solutions Architect - Associate, from Amazon Web Services
Posted 2 months ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
We are seeking a highly skilled and driven Senior Cloud Engineer with 3+ years of experience in cloud infrastructure, automation, and software development. This role focuses on building and maintaining secure, scalable, and efficient cloud systems. The ideal candidate will have hands-on expertise in software development, infrastructure, automation, and container orchestration.

As a Senior Cloud Engineer, you will design and implement solutions for complex, large-scale systems. You will collaborate across teams to deliver innovative, reliable cloud infrastructure while maintaining a strong focus on scalability, security, and cost efficiency. This role offers opportunities to lead technical initiatives and continuously enhance your expertise.

Responsibilities
- Design and Build Cloud Infrastructure: Develop and maintain secure, scalable cloud platforms, ensuring cloud governance and operational efficiency.
- Infrastructure as Code (IaC): Manage and automate cloud infrastructure using tools such as Terraform and CloudFormation to ensure consistent, repeatable deployments.
- Kubernetes and Container Orchestration: Deploy, monitor, and manage containerized workloads in Kubernetes environments (e.g., EKS, GKE).
- Cloud Security and Governance: Implement cloud governance frameworks, monitor security configurations, and manage role-based access and compliance controls.
- Automation and CI/CD: Build and maintain CI/CD pipelines to automate software deployment, reduce manual effort, and increase system reliability.
- Networking and Connectivity: Configure and troubleshoot cloud networking components, such as VPCs, transit gateways, routing, and proxies.
- Monitoring and Optimization: Enhance platform performance, reliability, and cost efficiency by implementing robust monitoring and optimization strategies.
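The autoscaling side of the Kubernetes responsibilities above follows a simple documented rule: the Horizontal Pod Autoscaler computes desired replicas as `ceil(currentReplicas × currentMetric / targetMetric)`. A sketch of that arithmetic in Python; the min/max clamping shown is how a typical HPA policy bounds the result, and the numeric bounds here are illustrative defaults:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """HPA scaling rule: ceil(current * metric / target), clamped
    to the policy's [min_replicas, max_replicas] range."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 4 pods averaging 90% CPU against a 60% target scale to `ceil(4 * 90 / 60) = 6` pods, while the same pods at 30% scale down to 2.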
- Collaboration: Partner with cross-functional teams to align infrastructure solutions with business needs, translating technical concepts into actionable insights.
- Incident Response: Troubleshoot and resolve complex cloud-related issues, ensuring minimal downtime and efficient incident management.
- Continuous Improvement: Stay current with advancements in cloud technologies, containerization, and platform engineering to drive continuous innovation and improvements.

Minimum Qualifications
- Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience.
- 3+ years of hands-on experience in software development and cloud engineering.
- Proficiency in one or more programming languages (Go, Python, Java, or Bash).
- Expertise in at least one major cloud platform (AWS, GCP, Azure, or Oracle Cloud).
- Strong understanding of infrastructure automation tools (Terraform, CloudFormation, Pulumi).
- Solid experience with container orchestration platforms (e.g., Kubernetes in EKS, GKE, or AKS).
- Networking expertise, including VPCs, subnets, routing, transit gateways, NAT gateways, and proxies.
- Experience with CI/CD pipelines using tools such as GitHub Actions, GitLab, Jenkins, or CircleCI.
- Familiarity with service discovery tools (e.g., Consul, Zookeeper) and secret management tools (e.g., HashiCorp Vault).
- Excellent communication and analytical skills to convey technical solutions clearly to diverse stakeholders.

Why Join Us?
- Work on cutting-edge cloud infrastructure projects that challenge and expand your technical expertise.
- Be part of a collaborative, inclusive culture that values knowledge-sharing, teamwork, and innovation.
- Grow your career through mentoring, learning programs, and opportunities to lead impactful initiatives.
- Contribute to building secure, reliable, and scalable cloud platforms used at scale by leveraging modern cloud engineering practices.

About Us
Fanatics is building a leading global digital sports platform.
We ignite the passions of global sports fans and maximize the presence and reach for our hundreds of sports partners globally by offering products and services across Fanatics Commerce, Fanatics Collectibles, and Fanatics Betting & Gaming, allowing sports fans to Buy, Collect, and Bet. Through the Fanatics platform, sports fans can buy licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods; collect physical and digital trading cards, sports memorabilia, and other digital assets; and bet as the company builds its Sportsbook and iGaming platform. Fanatics has an established database of over 100 million global sports fans; a global partner network with approximately 900 sports properties, including major national and international professional sports leagues, players associations, teams, colleges, college conferences and retail partners, 2,500 athletes and celebrities, and 200 exclusive athletes; and over 2,000 retail locations, including its Lids retail stores. Our more than 22,000 employees are committed to relentlessly enhancing the fan experience and delighting sports fans globally. About The Team Fanatics Commerce is a leading designer, manufacturer, and seller of licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods. It operates a vertically-integrated platform of digital and physical capabilities for leading sports leagues, teams, colleges, and associations globally – as well as its flagship site, www.fanatics.com. 
Fanatics Commerce has a broad range of online, sports venue, and vertical apparel partnerships worldwide, including comprehensive partnerships with leading leagues, teams, colleges, and sports organizations across the world—including the NFL, NBA, MLB, NHL, MLS, Formula 1, and Australian Football League (AFL); the Dallas Cowboys, Golden State Warriors, Paris Saint-Germain, Manchester United, Chelsea FC, and Tokyo Giants; the University of Notre Dame, University of Alabama, and University of Texas; the International Olympic Committee (IOC), England Rugby, and the Union of European Football Associations (UEFA).

At Fanatics Commerce, we infuse our BOLD Leadership Principles in everything we do:
- Build Championship Teams
- Obsessed with Fans
- Limitless Entrepreneurial Spirit
- Determined and Relentless Mindset
Posted 2 months ago
0 - 7 years
0 Lacs
Pune, Maharashtra
Work from Office
Experience: 4 – 7 years
Location: Pune, India (Work from Office)

Job Description:
- 4+ years of hands-on experience with Hadoop and system administration, with sound knowledge of Unix-based operating system internals.
- Working experience on the Cloudera CDP and CDH and Hortonworks HDP distributions.
- Linux experience (RedHat, CentOS, Ubuntu).
- Experience in setting up and supporting Hadoop environments (cloud and on-premises).
- Ability to set up, configure, and implement security for Hadoop clusters using Kerberos.
- Ability to implement data-at-rest encryption (required) and data-in-transit encryption (optional).
- Ability to set up and troubleshoot data-replication peers and policies.
- Experience in setting up services like YARN, HDFS, ZooKeeper, Hive, Spark, HBase, etc.
- Willing to work in 24x7 rotating shifts, including weekends and public holidays.
- Knows the Hadoop command-line interface.
- Scripting background (shell/bash, Python, etc.) for automation and configuration management.
- Knowledge of Ranger, SSL, Atlas, etc.
- Knowledge of Hadoop data-related auditing methods.
- Excellent communication and interpersonal skills.
- Ability to work closely with the infrastructure, networking, and development teams.
- Setting up the platform using AWS cloud-native services.
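The ZooKeeper setup work in the listing above relies on one sizing fact: an ensemble stays available only while a strict majority of servers is up, so it tolerates floor((n − 1)/2) failures. A tiny sketch of that arithmetic, which is why 3- and 5-node ensembles are the standard choices:

```python
def zk_fault_tolerance(ensemble_size):
    """A ZooKeeper ensemble needs a strict majority (quorum) to serve
    requests, so it tolerates floor((n - 1) / 2) server failures."""
    return (ensemble_size - 1) // 2

def quorum_size(ensemble_size):
    """Smallest strict majority of the ensemble."""
    return ensemble_size // 2 + 1
```

Note that a 4-node ensemble tolerates the same single failure as a 3-node one while adding coordination overhead, which is why even-sized ensembles are generally avoided.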
Posted 2 months ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payment choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships, and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title And Summary
Software Engineer-2

Who is Mastercard?
Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships, and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Overview
The Mastercard Account Management Services team has an exciting opportunity for a Software Engineer to enhance and modernize our payments services. This position will be key to growing a global technology platform operating at scale, requiring focus on performance, security, and reliability. Do you want to positively influence the experience of millions of customers? Do you like to get involved in the creation and execution of strategic initiatives centered around digital payments? Do you look forward to developing and engaging with high-performing, diverse teams around the globe? Do you like to own and be accountable for highly visible, strategically important teams?
Role
- Develop (code) enterprise applications with quality, within schedule, and within estimated effort.
- Provide estimates for the assigned tasks.
- Write and execute unit and integration test cases.
- Provide accurate status of tasks.
- Perform peer code reviews.
- Comply with the organization’s processes and policies, and protect the organization’s intellectual property. Also, participate in organization-level process improvement and knowledge sharing.

All About You
Essential knowledge, skills & attributes:
- Hands-on experience with core Java, Spring Boot, Spring (MVC, IOC, AOP, Security), SQL, RDBMS (Oracle and Postgres), NoSQL (Cassandra preferable), web services (JSON and SOAP), Kafka, and ZooKeeper
- Hands-on experience developing microservice applications and deploying them on any one of the public clouds, such as Google, AWS, Azure, or PCF
- Hands-on experience using the IntelliJ/Eclipse/MyEclipse IDE
- Hands-on experience writing JUnit test cases and working with Maven/Ant/Gradle and Git
- Knowledge of design patterns
- Experience working with Agile methodologies
- Strong logical and analytical skills and design skills; able to articulate and present thoughts clearly and precisely in English (written and verbal)
- Knowledge of security concepts (e.g., authentication, authorization, confidentiality) and protocols, and their usage in enterprise applications

Additional/Desirable Capabilities
- Experience working in the payments application domain
- Hands-on experience working with tools like Mockito, JBehave, Jenkins, Bamboo, Confluence, Rally, and Jira
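The Kafka skills listed above rest on one core routing idea: a record's key deterministically selects its partition, so records with the same key preserve their relative order. A simplified Python sketch of that hash-mod-N routing; note that real Kafka clients hash keys with murmur2 (Java) or configurable hashes (librdkafka), so CRC32 here is purely illustrative and the partition numbers will not match a real cluster's:

```python
import zlib

def pick_partition(key: str, num_partitions: int) -> int:
    """Route a keyed record to a partition. Real Kafka clients use
    murmur2 over the key bytes; CRC32 stands in here only to show the
    deterministic hash-mod-N idea, not to reproduce Kafka's mapping."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Same key -> same partition, which is what gives per-key ordering.
p = pick_partition("user-42", 6)
```

This is also why changing the partition count of an existing topic reshuffles key-to-partition assignments and can break per-key ordering guarantees.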
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-242407
Posted 2 months ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Experience: 4+
Location: Pune (Work from Office)
Notice period: Immediate joiners only

Mandatory Skills:
- 4+ years of hands-on experience with Hadoop and system administration, with sound knowledge of Unix-based operating system internals.
- Working experience on the Cloudera CDP and CDH and Hortonworks HDP distributions.
- Linux experience (RedHat, CentOS, Ubuntu).
- Experience in setting up and supporting Hadoop environments (cloud and on-premises).
- Ability to set up, configure, and implement security for Hadoop clusters using Kerberos.
- Ability to implement data-at-rest encryption (required) and data-in-transit encryption (optional).
- Ability to set up and troubleshoot data-replication peers and policies.
- Experience in setting up services like YARN, HDFS, ZooKeeper, Hive, Spark, HBase, etc.
- Willing to work in 24x7 rotating shifts, including weekends and public holidays.
- Knows the Hadoop command-line interface.
- Scripting background (shell/bash, Python, etc.) for automation and configuration management.
- Knowledge of Ranger, SSL, Atlas, etc.
- Knowledge of Hadoop data-related auditing methods.
- Excellent communication and interpersonal skills.
- Ability to work closely with the infrastructure, networking, and development teams.
- Setting up the platform using AWS cloud-native services.
Posted 2 months ago