2.0 years
0 Lacs
India
On-site
We're building the next-generation communications analytics and automation platform, one that fuses deep telemetry, enterprise-scale voice/calling data, and AI-driven remediation. As a Senior Backend Engineer, you'll play a core role in designing the resilient, scalable backend of a high-visibility platform that already drives action across global Microsoft Teams deployments.

This isn't a maintenance gig. This is architecture, orchestration, and ownership. You'll help design microservices, implement scalable APIs, and ensure data flows seamlessly from complex real-time systems (like call quality diagnostics and device telemetry) into actionable intelligence and automation pipelines. If you're excited by backend systems with real-world impact, and want to transition into intelligent agentic systems powered by GenAI, this role is built for you.

What You'll Work On

Platform Engineering (Core Backend)
• Design and implement robust, cloud-native services using modern backend stacks (Node.js, Python, .NET Core, or similar).
• Build scalable APIs to surface data and actions across TeamsCoreIQ modules (call analytics, device insights, policy management, AI-based RCA).
• Integrate with Microsoft Graph APIs and Teams Calling infrastructure (Auto Attendants, Call Queues, Call Quality, Presence, Policies).
• Develop event-driven workflows using queues (Service Bus, Kafka, RabbitMQ) for high-throughput ingestion and action pipelines.
• Work with real-time data stores, telemetry ingestion, and time-series analytics backends (PostgreSQL, MongoDB, InfluxDB, or equivalent).

Infrastructure & DevOps Support
• Help scale and secure workloads using Azure, Kubernetes, and CI/CD pipelines (GitHub Actions, Azure DevOps).
• Implement observability practices (logging, metrics, alerting) for zero-downtime insights and RCA.

Future-Forward (Agentic Track)
Support the evolution of the backend toward intelligent agent orchestration:
• Build services that allow modular "agents" to retrieve, infer, and act (e.g. provisioning, remediation, escalation).
• Explore interfaces for integrating OpenAI, Azure AI, or RAG pipelines to make automation contextual and proactive.

What You Bring

Must-Have Technical Skills
• 2+ years of backend engineering experience with production-grade systems.
• Strong proficiency in at least one modern backend language (Node.js, Python, Go, or .NET Core).
• Deep understanding of RESTful API design; GraphQL is a bonus.
• Experience building cloud-native apps on Azure (preferred), AWS, or GCP.
• Familiarity with the Microsoft ecosystem: Graph API, Teams, Entra ID (AAD); SIP/VoIP call data a big plus.
• Experience with relational and NoSQL databases; data modeling and performance tuning.

Bonus (Not Mandatory, but Highly Valued)
• Exposure to AI/ML pipelines, LangChain, OpenAI API, or vector databases (Pinecone, Weaviate).
• Background in observability, root-cause analysis systems, or voice analytics.
• Experience with policy engines, RBAC, and multi-tenant SaaS platforms.

Traits We Love
• Systems Thinker: you optimize for scale and understand how backend services interact across a distributed system.
• Builder's DNA: you love to own, refine, and ship high-quality features fast.
• Learning Velocity: you're interested in agentic architectures and GenAI, and eager to transition toward intelligent orchestration.
• Code Ethic: you write clean, maintainable, testable code, and always think security-first.

Performance Expectations (First 30 Days)
• Ship a core module with full test coverage and observability.
• Deliver API endpoints for at least one major module (e.g. RCA, Call Analytics, DeviceIQ).
• Draft and refine at least one reusable internal service that improves time-to-market for future agents.
• Collaborate with frontend, DevOps, and AI teams to support rapid iteration and experimentation.
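The event-driven ingestion-and-action pattern described in this posting can be sketched with a minimal in-memory pipeline. This is a hypothetical illustration: Python's stdlib queue stands in for Service Bus/Kafka, and the field names (call_id, packet_loss) are invented.

```python
import queue
import threading

def produce(events, q):
    """Producer: push raw telemetry events onto the queue."""
    for e in events:
        q.put(e)
    q.put(None)  # sentinel: no more events

def consume(q, sink):
    """Consumer: validate and enrich each event, then hand it off."""
    while True:
        e = q.get()
        if e is None:
            break
        if "call_id" not in e:        # validation: drop malformed records
            continue
        # enrichment: classify call quality before the action pipeline
        e["severity"] = "high" if e.get("packet_loss", 0) > 0.05 else "normal"
        sink.append(e)                # action: hand off to remediation/analytics

q = queue.Queue()
processed = []
t = threading.Thread(target=consume, args=(q, processed))
t.start()
produce([{"call_id": 1, "packet_loss": 0.08}, {"bad": True}, {"call_id": 2}], q)
t.join()
print([e["call_id"] for e in processed])  # → [1, 2]
```

In a real deployment the producer and consumer would be separate services, with the queue providing the durability and backpressure the in-memory version lacks.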
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Head of Architecture and Technology (Hands-On, High-Ownership)

Company: Elysium PTE. LTD.
Location: Chennai, Tamil Nadu (at office)
Employment Type: Full-time, permanent
Compensation: ₹15 L fixed CTC + up to 5% ESOP (performance-linked vesting, 4-year schedule with 1-year cliff)
Reports to: Founding Team

About Elysium
Elysium is a founder-led studio headquartered in Singapore with its delivery hub in Chennai. We are currently building a global gaming-based mar-tech platform while running a premium digital-services practice (branding, immersive web, SaaS MVPs, AI-powered solutions). We thrive on speed, experimentation, and shared ownership.

The opportunity
We're looking for a hungry technologist who can work in an early-stage start-up alongside the founders to build ambitious global products and services. You'll code hands-on every week, shape product architecture, and grow a lean engineering pod, owning both our flagship product and client deliveries.

What you will achieve in your first 12 months
• Coordinate and develop the in-house products with internal and external teams.
• Build and mentor a six-to-eight-person engineering/design squad that hits ≥85% on-time delivery for IT-service clients.
• Cut mean time-to-deployment to under 30 minutes through automated CI/CD and Infrastructure-as-Code.
• Implement GDPR-ready data flows and a zero-trust security baseline across all projects.
• Publish quarterly tech radars and internal playbooks that keep the team learning and shipping fast.

Day-to-day responsibilities
• Resource management and planning across internal and external teams for our products and client deliveries.
• Pair-program and review pull requests to enforce clean, testable code.
• Translate product/user stories into domain models, sprint plans, and staffing forecasts.
• Design cloud architecture (AWS/GCP) that balances cost and scale; own IaC, monitoring, and on-call until an SRE is hired.
• Evaluate and manage specialist vendors for parts of the flagship app; hold them accountable on quality and deadlines.
• Scope and pitch technical solutions in client calls; draft SoWs and high-level estimates with founders.
• Coach developers and designers, set engineering KPIs, run retrospectives and post-mortems.
• Prepare technical artefacts for future fundraising and participate in VC diligence.

Must-have requirements
• 5-8 years of modern full-stack development, with at least one product shipped to >10k MAU or comparable B2B scale.
• Expert knowledge of modern full-stack ecosystems: Node.js, Python, or Go; React/Next.js; distributed data stores (PostgreSQL, DynamoDB, Redis, Kafka, or similar).
• Deep familiarity with AWS, GCP, or Azure, including cost-optimized design, autoscaling, serverless patterns, container orchestration, and IaC tools such as Terraform or CDK.
• Demonstrated ownership of DevSecOps practices: CI/CD, automated testing matrices, vulnerability scanning, SRE dashboards, and incident post-mortems.
• Excellent communication skills; able to explain complex trade-offs to founders, designers, marketers, and non-technical investors.
• Hunger to learn, ship fast, and own meaningful equity in lieu of a senior-corporate paycheck.

Nice-to-have extras
• Prior work in fintech, ad-tech, or loyalty.
• Experience with WebGL/Three.js, real-time event streaming (Kafka, Kinesis), LLM pipelines, and blockchain.
• Exposure to seed- or Series-A fundraising, investor tech diligence, or small-team leadership.

What we offer
• ESOP of up to 5% on a 4-year vest (1-year cliff) with performance accelerators tied to product milestones.
• Direct influence on tech stack, culture, and product direction: your code and decisions will shape the company's valuation.
• A team that values curiosity, transparency, and shipping beautiful work at start-up speed.
Posted 1 week ago
10.0 years
0 Lacs
Delhi, India
On-site
Company Size: Mid-Sized
Experience Required: 10 - 15 years
Working Days: 5 days/week
Office Location: Delhi

Role & Responsibilities
• Lead and mentor a team of data engineers, ensuring high performance and career growth.
• Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
• Drive the development and implementation of data governance frameworks and best practices.
• Work closely with cross-functional teams to define and execute a data roadmap.
• Optimize data processing workflows for performance and cost efficiency.
• Ensure data security, compliance, and quality across all data platforms.
• Foster a culture of innovation and technical excellence within the data team.

Ideal Candidate
• 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
• Expertise in backend development with programming languages such as Java, PHP, Python, Node.js, Go, and JavaScript, plus HTML and CSS.
• Proficiency in SQL, Python, and Scala for data processing and analytics.
• Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
• Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice.
• Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
• Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery.
• Deep knowledge of data governance, security, and compliance (GDPR, SOC 2, etc.).
• Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
• Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
• Proven ability to drive technical strategy and align it with business objectives.
• Strong leadership, communication, and stakeholder management skills.

Preferred Qualifications
• Experience in machine learning infrastructure or MLOps is a plus.
• Exposure to real-time data processing and analytics.
• Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
• Prior experience in a SaaS or high-growth tech company.

Perks, Benefits and Work Culture
Testimonial from a designer: "One of the things I love about the design team at Wingify is the fact that every designer has a style which is unique to them. The second best thing is non-compliance to pre-existing rules for new products. So I just don't follow guidelines, I help create them."
Posted 1 week ago
6.0 - 8.0 years
0 - 0 Lacs
Bangalore, Noida, Chennai
Remote
Sr IT Data Analyst

We are currently seeking a Sr IT Data Analyst to perform data analysis for a data warehouse/operational data store, data marts, and other data stores in support of the Optum business. The new hire will define and maintain business intelligence/data warehouse methodologies, standards, and industry best practices. You will work with the Development and QA teams to develop data delivery/processing solutions and to create a Data Dictionary with full descriptions of data elements and their usage.

Responsibilities include:
• Gather business requirements for analytical applications in an iterative/agile development model, partnering with Business and IT stakeholders.
• Create source-to-target mappings based on requirements.
• Create rules definitions, data profiling, and transformation logic.
• Gather and prepare analysis based on requirements from internal and external sources to evaluate and demonstrate program effectiveness, efficiency, and problem solving.
• Support Data Governance activities and be responsible for data integrity.
• Develop scalable reporting processes and query data sources to conduct ad hoc analyses and detailed data profiling.
• Research complex functional data/analytical issues.
• Assume responsibility for data integrity and data quality among various internal groups and/or between internal and external sources.
• Provide source system analysis and perform gap analysis between source and target systems.

Requirements:
• 5+ years of Healthcare business and data analysis experience.
• Proficient in SQL; understands data modeling and storage concepts like Snowflake.
• Must have an aptitude for learning new data flows quickly and participating in data quality and automation discussions.
• Comfortable working as an SME, educating data consumers on data profiles and issues.
• Must be able to take end-to-end responsibility for quickly solving data issues in a production setting.
• Knowledge of Data Platforms, the Data-as-a-Service model, and DataOps practices.

Preferred Qualifications:
• Working knowledge of Kafka, Databricks, GitHub, Airflow, and Azure (highly preferred).
• Healthcare industry Claims and Eligibility experience.
• Experience with Python scripts.
• Knowledge of AI models.
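The detailed data-profiling work this role calls for can be sketched in plain Python. This is a hypothetical illustration, not Optum code: the column and record values are invented, and real profiling would run against the warehouse via SQL or a profiling tool.

```python
from collections import Counter

def profile_column(rows, column):
    """Basic profile of one column: null rate, cardinality, and top values."""
    values = [r.get(column) for r in rows]
    nulls = sum(1 for v in values if v is None)
    non_null = [v for v in values if v is not None]
    return {
        "null_rate": nulls / len(values) if values else 0.0,
        "distinct": len(set(non_null)),
        "top_values": Counter(non_null).most_common(3),
    }

# Invented sample records standing in for rows from a claims table.
claims = [
    {"member_id": "A1", "status": "PAID"},
    {"member_id": "A2", "status": None},
    {"member_id": "A3", "status": "PAID"},
    {"member_id": "A4", "status": "DENIED"},
]
profile = profile_column(claims, "status")
print(profile)  # → {'null_rate': 0.25, 'distinct': 2, 'top_values': [('PAID', 2), ('DENIED', 1)]}
```

A profile like this is what feeds a Data Dictionary entry: null rate and cardinality describe the element's usage, and the top values suggest its domain.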
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Java Full Stack Developer
Experience: 5+ Years
Mandate Skill: Spring Boot for backend development and proficiency in ReactJS for front-end development

Required Skills
• Backend: Java, Spring Boot, Microservices, REST APIs, JPA/Hibernate
• Frontend: ReactJS, JavaScript, TypeScript, Redux
• Database: PostgreSQL, MySQL, MongoDB
• Cloud & DevOps: Docker, Kubernetes, CI/CD, GitHub Actions or Jenkins
• Messaging & Caching: Kafka, Redis
• Agile Practices: Jira, Confluence, Scrum

Salary: up to ₹20,00,000 per annum (20 LPA)

We are looking for a mid-level full stack developer with a strong backend focus to join our team. The ideal candidate should have hands-on experience in Spring Boot for backend development and be proficient in ReactJS for front-end development. The candidate will be responsible for developing, enhancing, and maintaining enterprise applications while working in an Agile environment.

Key Responsibilities

Backend Development:
• Design, develop, and maintain RESTful APIs using Spring Boot and Java.
• Implement microservices architecture and ensure high-performance applications.
• Work with relational and NoSQL databases, optimizing queries and performance.
• Integrate with third-party APIs and messaging queues (Kafka, RabbitMQ).

Frontend Development:
• Build and maintain user interfaces using ReactJS and modern UI frameworks.
• Ensure seamless API integration between front-end and back-end systems.
• Implement reusable components and optimize front-end performance.

DevOps & Deployment:
• Work with Docker and Kubernetes for application deployment.
• Ensure CI/CD pipeline integration and automation.

Collaboration & Agile Process:
• Work closely with onshore and offshore teams in a POD-based delivery model.
• Participate in daily stand-ups, sprint planning, and retrospectives.
• Write clean, maintainable, and well-documented code following best practices.

Preferred Qualifications
• Prior experience working on Albertsons projects is a huge plus.
• Familiarity with Google Cloud Platform (GCP) or any cloud platform.
• Exposure to monitoring tools like Prometheus, Grafana.
• Strong problem-solving skills and ability to work independently.
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Title: Software Engineer - Backend (Python)
Experience: 7+ Years
Location: Hyderabad

About the Role:
Our team is responsible for building the backend components of the GenAI Platform. The platform offers:
• Safe, compliant, and cost-efficient access to LLMs, including open-source and commercial ones, adhering to Experian standards and policies
• Reusable tools, frameworks, and coding patterns for the various functions involved in either fine-tuning an LLM or developing a RAG-based application

What you'll do here
• Design and build backend components of our GenAI platform on AWS.
• Collaborate with geographically distributed cross-functional teams.
• Participate in the on-call rotation with the rest of the team to handle production incidents.

What you'll need to succeed

Must-Have Skills
• At least 7 years of professional backend web development experience with Python.
• Experience with AI and RAG.
• Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
• Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
• Experience with web development frameworks such as Flask, Django, or FastAPI.
• Experience with concurrent programming designs such as AsyncIO.
• Experience with any of the public cloud platforms (AWS, Azure, GCP), preferably AWS.
• Experience with CI/CD practices, tools, and frameworks.

Nice-to-Have Skills
• Experience with Apache Kafka and developing Kafka client applications in Python.
• Experience with big data processing frameworks, preferably Apache Spark.
• Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
• Experience with unit and functional testing frameworks.
• Experience with various Python packaging options such as Wheel, PEX, or Conda.
• Experience with metaprogramming techniques in Python.
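The AsyncIO requirement above can be illustrated with a minimal sketch of fanning out concurrent calls, say to several LLM backends. The backend names and delays are invented for illustration; a real service would await actual HTTP or SDK calls.

```python
import asyncio

async def query_model(name: str, delay: float) -> str:
    """Stand-in for an async call to one LLM backend."""
    await asyncio.sleep(delay)  # simulates network latency
    return f"{name}: ok"

async def main() -> list:
    # Fan out to several backends concurrently; total wall time is roughly
    # the slowest call, not the sum of all calls.
    return await asyncio.gather(
        query_model("opensource-llm", 0.02),
        query_model("commercial-llm", 0.01),
    )

results = asyncio.run(main())
print(results)  # → ['opensource-llm: ok', 'commercial-llm: ok']
```

Note that asyncio.gather preserves the argument order in its result list regardless of which call finishes first.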
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Title: Software Engineer - Backend (Python)

About The Role
Our team is responsible for building the backend components of an MLOps platform on AWS. The backend components we build are the fundamental blocks for feature engineering, feature serving, model deployment, and model inference in both batch and online modes.

What You'll Do Here
• Design and build backend components of our MLOps platform on AWS.
• Collaborate with geographically distributed cross-functional teams.
• Participate in the on-call rotation with the rest of the team to handle production incidents.

What you'll need to succeed

Must-Have Skills
• Experience with web development frameworks such as Flask, Django, or FastAPI.
• Experience working with WSGI and ASGI web servers such as Gunicorn, Uvicorn, etc.
• Experience with concurrent programming designs such as AsyncIO.
• Experience with unit and functional testing frameworks.
• Experience with any of the public cloud platforms (AWS, Azure, GCP), preferably AWS.
• Experience with CI/CD practices, tools, and frameworks.

Nice-to-Have Skills
• Experience with Apache Kafka and developing Kafka client applications in Python.
• Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
• Experience with big data processing frameworks, preferably Apache Spark.
• Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
• Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
• Experience with various Python packaging options such as Wheel, PEX, or Conda.
• Experience with metaprogramming techniques in Python.

Primary Skills
• Python development (Flask, Django, or FastAPI)
• WSGI & ASGI web servers (Gunicorn, Uvicorn, etc.)
• AWS
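The WSGI server experience listed above boils down to understanding the callable interface that servers like Gunicorn invoke. Below is a minimal sketch (a hypothetical health-check endpoint, not the platform's actual code) exercised in-process using the stdlib's wsgiref helpers, the same way a WSGI server would call it.

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """Minimal WSGI callable: a health-check endpoint for a model-serving service."""
    if environ["PATH_INFO"] == "/health":
        body = b'{"status": "ok"}'
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = b"not found"
        start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [body]

# Drive the app directly: build a baseline environ, then call it as a server would.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/health"
captured = {}
def start_response(status, headers):
    captured["status"] = status
body = b"".join(app(environ, start_response))
print(captured["status"], body)  # → 200 OK b'{"status": "ok"}'
```

In production, this callable would simply be handed to Gunicorn (e.g. `gunicorn module:app`); ASGI servers like Uvicorn use an analogous but async interface.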
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
💼 Job Title: Kafka Developer
👨‍💻 Job Type: Full-time
📍 Location: Pune
💼 Work regime: Hybrid
🔥 Keywords: Kafka, Apache Kafka, Kafka Connect, Kafka Streams, Schema Registry

Position Overview:
We are looking for a Kafka Developer to design and implement real-time data ingestion pipelines using Apache Kafka. The role involves integrating with upstream flow record sources, transforming and validating data, and streaming it into a centralized data lake for analytics and operational intelligence.

Key Responsibilities:
• Develop Kafka producers to ingest flow records from upstream systems such as flow record exporters (e.g., IPFIX-compatible probes).
• Build Kafka consumers to stream data into Spark Structured Streaming jobs and downstream data lakes.
• Define and manage Kafka topic schemas using Avro and Schema Registry for schema evolution.
• Implement message serialization, transformation, enrichment, and validation logic within the streaming pipeline.
• Ensure exactly-once processing, checkpointing, and fault tolerance in streaming jobs.
• Integrate with downstream systems such as HDFS or Parquet-based data lakes, ensuring compatibility with ingestion standards.
• Collaborate with Kafka administrators to align topic configurations, retention policies, and security protocols.
• Participate in code reviews, unit testing, and performance tuning to ensure high-quality deliverables.
• Document pipeline architecture, data flow logic, and operational procedures for handover and support.

Required Skills & Qualifications:
• Proven experience in developing Kafka producers and consumers for real-time data ingestion pipelines.
• Strong hands-on expertise in Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
• Proficiency in Apache Spark (Structured Streaming) for real-time data transformation and enrichment.
• Solid understanding of IPFIX, NetFlow, and network flow data formats; experience integrating with nProbe Cento is a plus.
• Experience with Avro, JSON, or Protobuf for message serialization and schema evolution.
• Familiarity with Cloudera Data Platform components such as HDFS, Hive, YARN, and Knox.
• Experience integrating Kafka pipelines with data lakes or warehouses using Parquet or Delta formats.
• Strong programming skills in Scala, Java, or Python for stream processing and data engineering tasks.
• Knowledge of Kafka security protocols including TLS/SSL, Kerberos, and access control via Apache Ranger.
• Experience with monitoring and logging tools such as Prometheus, Grafana, and Splunk.
• Understanding of CI/CD pipelines, Git-based workflows, and containerization (Docker/Kubernetes).

A little about us:
Innova Solutions is a diverse and award-winning global technology services partner. We provide our clients with strategic technology, talent, and business transformation solutions, enabling them to be leaders in their field.
• Founded in 1998, headquartered in Atlanta (Duluth), Georgia.
• Employs over 50,000 professionals worldwide, with annual revenue approaching $3.0B.
• Delivers strategic technology and business transformation solutions globally.
• Operates through global delivery centers across North America, Asia, and Europe.
• Provides services for data center migration and workload development for cloud service providers.
• Awardee of prestigious recognitions including:
  - Women's Choice Awards: Best Companies to Work for Women & Millennials, 2024
  - Forbes: America's Best Temporary Staffing and Best Professional Recruiting Firms, 2023
  - American Best in Business, Globee Awards: Healthcare Vulnerability Technology Solutions, 2023
  - Global Health & Pharma: Best Full-Service Workforce Lifecycle Management Enterprise, 2023
  - Three SBU Leadership in Business Awards
  - Stevie International Business Awards: Denials Remediation Healthcare Technology Solutions, 2023
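The validation-and-enrichment step described in the responsibilities can be sketched independently of Kafka itself. This is a hypothetical illustration: the flow-record fields (src_ip, bytes) and the JSON encoding are invented, and the Kafka producer/consumer plumbing around this function is omitted. Malformed records return None, which a real pipeline might route to a dead-letter topic.

```python
import json

def enrich_flow_record(raw):
    """Validate and enrich one serialized flow record before it is forwarded.
    Returns the enriched dict, or None for malformed records."""
    try:
        rec = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if "src_ip" not in rec or "bytes" not in rec:
        return None                              # validation: required fields
    # enrichment: tag traffic direction from the (invented) private prefix
    rec["direction"] = "outbound" if rec["src_ip"].startswith("10.") else "inbound"
    return rec

records = [
    b'{"src_ip": "10.0.0.5", "bytes": 1200}',
    b'not json',
    b'{"src_ip": "203.0.113.9", "bytes": 40}',
]
enriched = [r for r in (enrich_flow_record(x) for x in records) if r]
print([r["direction"] for r in enriched])  # → ['outbound', 'inbound']
```

In the actual pipeline this logic would sit inside the consumer or a Spark Structured Streaming map step, with Avro and Schema Registry replacing the hand-rolled JSON checks.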
Posted 1 week ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role: Senior Software Engineer
Experience Required: 4-6 years
Skills: Java, Spring Boot
Location: Sector 16, Noida
Work Mode: 5 days (Work from Office)
Interview Mode: Face-to-face
Notice Period: Immediate/Serving only

About Times Internet
At Times Internet, we create premium digital products that simplify and enhance the lives of millions. As India's largest digital products company, we have a significant presence across a wide range of categories, including News, Sports, Fintech, and Enterprise solutions. Our portfolio features market-leading and iconic brands such as TOI, ET, NBT, Cricbuzz, Times Prime, Times Card, Indiatimes, Whatshot, Abound, Willow TV, Techgig and Times Mobile among many more. Each of these products is crafted to enrich your experiences and bring you closer to your interests and aspirations.

As an equal opportunity employer, Times Internet strongly promotes inclusivity and diversity. We are proud to have achieved overall gender pay parity in 2018, verified by an independent audit conducted by Aon Hewitt. We are driven by the excitement of new possibilities and are committed to bringing innovative products, ideas, and technologies to help people make the most of every day. Join us and take us to the next level!

About the Business Unit: Architecture and Group Initiatives (AGI)
AGI owns the world-class Enterprise CMS solutions that empower all digital newsrooms within Times Internet and beyond. The solutions include state-of-the-art authoring tools with AI-enabled generative and assistive features, plus analytics and reporting tools and services that easily scale to millions of requests per minute. This unique scaling need and the engineering of state-of-the-art products make AGI a place of constant evolution and innovation across product, design, and engineering in the ever-growing digital and print media industry landscape.
About the role:
We seek a highly skilled and experienced Java Senior Software Engineer to join our dynamic team and play a key role in designing, developing, and maintaining our Internet-based applications. As a Senior Engineer, you will actively participate in designing and implementing projects with high technical complexity, scalability, and performance implications. You will collaborate with cross-functional teams to deliver high-quality software solutions that meet customer needs and business objectives.

Roles and Responsibilities
• Design, develop, and test large-scale, high-performance web applications and frameworks.
• Create reusable frameworks through hands-on development and unit testing.
• Write clean, efficient, and maintainable code following best practices and coding standards.
• Troubleshoot and debug issues, and implement solutions on time.
• Participate in architectural discussions and contribute to the overall technical roadmap.
• Stay updated on emerging technologies and trends in Java development, and make recommendations for adoption where appropriate.

Skills Required:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 4+ years of hands-on experience in Java development, with a strong understanding of core Java concepts and object-oriented programming principles.
• Proficiency in the Spring framework, including Spring Boot, Spring MVC, and Spring Data.
• Experience with Kafka for building distributed, real-time streaming applications.
• Strong understanding of relational databases such as MySQL, including schema design and optimization. Proficiency in writing SQL queries is a must.
• Experience with NoSQL databases such as MongoDB and Redis.
• Experience with microservices architecture and containerization technologies such as Docker and Kubernetes.
• Excellent problem-solving skills and attention to detail.
• Knowledge of software development lifecycle methodologies such as Agile or Scrum.
• Strong communication and collaboration skills.
• Ability to work effectively in a fast-paced environment and manage multiple priorities.
• Self-motivation and the ability to work under minimal supervision.
Posted 1 week ago
1.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

About Oracle Analytics & Big Data Service:
Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights.

Oracle Big Data Service, part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI's native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. This includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtimes in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems for monitoring, gathering telemetry on the service's runtime characteristics, and acting on that telemetry data is also part of the charter.

We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics.

At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space: we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.

Minimum Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
• 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
• Must hold a US passport (required by the position to access US Gov regions).
• Expertise in coding in Java and Python, with an emphasis on tuning/optimization.
• Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
• Experience with open-source software in the Big Data ecosystem.
• Experience at an organization with an operational/dev-ops culture.
• Solid understanding of networking, storage, and security components related to cloud infrastructure.
• Solid foundation in data structures, algorithms, and software design with strong analytical and debugging skills.

Preferred Qualifications:
• Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies.
• Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
• In-depth understanding of Java and JVM mechanics.
• Good problem-solving skills and the ability to work in a fast-paced, agile environment.

Responsibilities

Key Responsibilities:
• Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service.
• Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings.
• Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
• Become an active member of the Apache open source community when working on open source components.
• Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
0 years
0 Lacs
Delhi, India
On-site
Description Skills Required: Bash/Shell scripting GitHub ETL Apache Spark Data validation strategies Docker & Kubernetes (for containerized deployments) Monitoring tools: Prometheus, Grafana Strong in Python Grafana-Prometheus, PowerBI/Tableau (important) Requirements Extensive hands-on experience implementing data migration and data processing Strong experience implementing ETL/ELT processes and building data pipelines, including workflow management, job scheduling, and monitoring Experience with building and implementing Big Data platforms on-premises or in the cloud, covering ingestion (batch and real-time), processing (batch and real-time), Polyglot Storage, and Data Access Good understanding of Data Warehouse, Data Governance, Data Security, Data Compliance, Data Quality, Metadata Management, Master Data Management, and Data Catalog Proven understanding and demonstrable implementation experience of big data platform technologies on the cloud (AWS and Azure), including surrounding services like IAM, SSO, cluster monitoring, Log Analytics, etc. Experience with source code management tools such as TFS or Git Knowledge of DevOps with CI/CD pipeline setup and automation Building and integrating systems to meet business needs Defining features, phases, and solution requirements and providing specifications accordingly Experience building stream-processing systems using solutions such as Azure Event Hub, Kafka, etc. Strong experience with data modeling and schema design Strong knowledge of SQL and NoSQL databases and/or BI/DW. Excellent interpersonal and teamwork skills Experience with leading and mentoring other team members Good knowledge of Agile Scrum Good communication skills Strong analytical, logical, and quantitative ability. Takes ownership of a task. Values accountability and responsibility.
Quick learner Job responsibilities ETL/ELT processes, data pipelines, Big Data platforms (On-Prem/Cloud), data ingestion (Batch/Real-time), data processing, Polyglot Storage, Data Governance, Cloud (AWS/Azure), IAM, SSO, Cluster monitoring, Log Analytics, source code management (Git/TFS), DevOps, CI/CD automation, stream processing (Kafka, Azure Event Hub), data modeling, schema design, SQL/NoSQL, BI/DW, Agile Scrum, team leadership, communication, analytical skills, ownership, quick learner What we offer Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility.
With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service’s scope encompasses not just tight integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtime in the underlying infrastructure (e.g., patching Linux kernels to address a security vulnerability). Developing systems for monitoring and gathering telemetry on the service’s runtime characteristics, and being able to act on that telemetry data, is part of the charter. We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services joining our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field. Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies. Must be a US passport holder; this is required by the position to access US Gov regions. Expertise in coding in Java and Python, with an emphasis on tuning/optimization. Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments. Experience with open-source software in the Big Data ecosystem. Experience at an organization with an operational/DevOps culture. Solid understanding of networking, storage, and security components related to cloud infrastructure. Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills. Preferred Qualifications: Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies. Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP. In-depth understanding of Java and JVM mechanics. Good problem-solving skills and the ability to work in a fast-paced, agile environment. Responsibilities Key Responsibilities: Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service. Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings. Collaborate with cross-functional teams, including DevOps, Security, and Product Management, to define and execute product roadmaps, service updates, and feature enhancements. Become an active member of the Apache open-source community when working on open-source components. Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service’s scope encompasses not just tight integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtime in the underlying infrastructure (e.g., patching Linux kernels to address a security vulnerability). Developing systems for monitoring and gathering telemetry on the service’s runtime characteristics, and being able to act on that telemetry data, is part of the charter. We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services joining our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field. Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies. Must be a US passport holder; this is required by the position to access US Gov regions. Expertise in coding in Java and Python, with an emphasis on tuning/optimization. Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments. Experience with open-source software in the Big Data ecosystem. Experience at an organization with an operational/DevOps culture. Solid understanding of networking, storage, and security components related to cloud infrastructure. Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills. Preferred Qualifications: Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies. Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP. In-depth understanding of Java and JVM mechanics. Good problem-solving skills and the ability to work in a fast-paced, agile environment. Responsibilities Key Responsibilities: Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service. Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings. Collaborate with cross-functional teams, including DevOps, Security, and Product Management, to define and execute product roadmaps, service updates, and feature enhancements. Become an active member of the Apache open-source community when working on open-source components. Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Technology Lead Analyst is a senior level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to lead applications systems analysis and programming activities. Responsibilities: Partner with multiple management teams to ensure appropriate integration of functions to meet goals, as well as identify and define necessary system enhancements to deploy new products and process improvements Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Qualifications: 14+ years of relevant experience in an Apps Development or systems analysis role Extensive experience in system analysis and programming of software applications Experience in managing and implementing successful projects Subject Matter Expert (SME) in at least one area of Applications Development Ability to adjust priorities quickly as circumstances dictate Demonstrated leadership and project management skills Consistently demonstrates clear and concise written and verbal communication Education: Bachelor’s degree/University degree or equivalent experience Master’s degree preferred This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. Knowledge/Experience: 14+ years of industry experience Experience with Agile development and scrums Strong knowledge of Core Java and Spring (Core, Boot, etc.) Expertise in Web API implementations (web services, RESTful services, etc.) Good understanding of Linux or Unix operating systems Strong knowledge of build tools (Ant/Maven), continuous integration (Jenkins), code quality analysis (SonarQube), and unit and integration testing (JUnit) Exposure to SCM tools like Bitbucket Strong knowledge of Docker/Kubernetes/OpenShift Strong knowledge of distributed messaging platforms (Apache Kafka, RabbitMQ, etc.) Good understanding of NoSQL databases like MongoDB Skills: Hands-on coding experience with Core Java and Spring Hands-on coding experience in Python is a plus Strong analysis and design skills, including OO design patterns Solid understanding of SOA concepts and RESTful API design Ability to produce professional, technically sound, and visually appealing presentations and architecture designs Experience creating high-level technical/process documentation and presentations for audiences at various levels.
Experience writing/editing technical, business, and process documentation in an Information Technology/Engineering environment Must be able to understand requirements and convert them to technical design and code Knowledge of source code control systems, unit test frameworks, build and deployment tools Experienced with large-scale program rollouts, with the ability to create and maintain detailed WBS and project plans. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organization, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone. VOIS India In 2009, VOIS started operating in India and now has established global delivery centers in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations and HR Operations and more. Role Purpose Mode : Hybrid Location : Pune Experience : 5 to 8 years Core Competencies, Knowledge And Experience 5-7 years’ experience in managing large data sets, simulation/optimization and distributed computing tools. Excellent communication & presentation skills with a track record of engaging with business project leads.
Role Purpose Primary responsibility is to define the data lifecycle, including data models and data sources for the analytics platform, gathering data from the business and cleaning it in order to provide ready-to-work inputs for Data Scientists Apply strong expertise in automating end-to-end data science pipelines & big data pipelines (collect, ingest, store, transform, and optimize at scale) The incumbent will work on the assigned projects and their stakeholders alongside Data Scientists to understand the business challenges faced by them. The work involves working with large data sets, simulation/optimization, and distributed computing tools. The candidate works with the assigned business stakeholder(s) to agree on scope, deliverables, process, and expected outcomes from the products and services developed. Must Have Technical / Professional Qualifications Experience working with large data sets, simulation/optimization, and distributed computing tools Experience transforming data with Apache Spark for Data Science activities Experience working with distributed storage on cloud (AWS/GCP) or HDFS Experience building data pipelines with Airflow Experience ingesting data from different sources using Kafka/Sqoop/Flume/NiFi Experience solving simple to complex big data platform/framework issues Experience building real-time analytics systems with Apache Spark, Flink & Kafka Experience in Scala, Python, Java & R Experience working with NoSQL databases (Cassandra, MongoDB, HBase, Redis) Key Accountabilities And Decision Ownership Understand the data science problems and design & schedule end-to-end pipelines For a given problem, identify the right big data technologies to solve it in an optimized way Automate the data science pipelines, deploy ML algorithms, and track their performance Build customer 360 and feature stores for different machine learning problems Build data models for the machine learning feature store on high-velocity, flexible-schema databases
VOIS Equal Opportunity Employer Commitment VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have been also highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we’ll be in touch!
Posted 1 week ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
About This Role Are you interested in building innovative technology that shapes the financial markets? Do you like working at the speed of a startup, and solving some of the world’s most exciting challenges? Do you want to work with, and learn from, hands-on leaders in technology and finance? At BlackRock, we are looking for Software Engineers who like to innovate and solve sophisticated problems. We recognize that strength comes from diversity, and will embrace your outstanding skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual. We invest and protect over $9 trillion (USD) of assets and have an extraordinary responsibility to our clients all over the world. Our technology empowers millions of investors to save for retirement, pay for college, buy a home, and improve their financial well-being. Being a technologist at BlackRock means you get the best of both worlds: working for one of the most sophisticated financial companies and being part of a software development team responsible for next generation technology and solutions. What are Aladdin and Aladdin Engineering? You will be working on BlackRock's investment operating system called Aladdin. Aladdin is used both internally within BlackRock and externally by many financial institutions. Aladdin combines sophisticated risk analytics with comprehensive portfolio management, trading, and operations tools on a single platform to power informed decision-making and create a connective tissue for thousands of users investing worldwide. Our development teams reside inside the Aladdin Engineering group. We collaboratively build the next generation of technology that changes the way information, people, and technology intersect for global investment firms. We build and package tools that manage trillions in assets and support millions of financial instruments.
We perform risk calculations and process millions of transactions for thousands of users every day worldwide! Being a Member Of Aladdin Engineering, You Will Be Tenacious: Work in a fast-paced and highly complex environment Creative thinker: Analyse multiple solutions and deploy technologies in a flexible way. Great teammate: Think and work collaboratively and communicate effectively. Fast learner: Pick up new concepts and apply them quickly. Responsibilities Include Collaborate with team members in a multi-office, multi-country environment. Deliver high-efficiency, highly available, concurrent, and fault-tolerant software systems. Significantly contribute to the development of Aladdin’s global, multi-asset trading platform. Work with product management and business users to define the roadmap for the product. Design and develop innovative solutions to complex problems, identifying issues and roadblocks. Apply validated quality software engineering practices through all phases of development. Ensure resilience and stability through quality code reviews, unit, regression and user acceptance testing, dev ops and level two production support. Be a leader with vision and a partner in brainstorming solutions for team productivity, efficiency, guiding and motivating others. Drive a strong culture by bringing principles of inclusion and diversity to the team and setting the tone through specific recruiting, management actions and employee engagement. For VP Level: In addition to the above, a VP-level candidate should be able to lead individual projects’ priorities, deadlines, and deliverables. Qualifications B.S. / M.S. degree in Computer Science, Engineering, or a related subject area B.E./ B.TECH./ MCA or any other relevant engineering degree from a reputed university.
For VP Level: 8+ years of proven experience Skills And Experience A proven foundation in C++ and related technologies in a multiprocess distributed UNIX environment Knowledge of Java, Perl, and/or Python is a plus Track record of building high-quality software with design-focused and test-driven approaches Experience working with an extensive legacy code base (e.g., C++98) Understanding of performance issues (memory, processing time, I/O, etc.) Understanding of relational databases is a must. Great analytical, problem-solving and communication skills Some experience or a real interest in finance, investment processes, and/or an ability to translate business problems into technical solutions. For VP Level: In addition to the above, a VP-level candidate should have experience leading development teams or projects, or being responsible for the design and technical quality of a significant application, system, or component. Ability to form positive relationships with partnering teams, sponsors, and user groups. Nice To Have And Opportunities To Learn Expertise in building distributed applications using SQL and/or NoSQL technologies like MS SQL, Sybase, Cassandra or Redis A real-world practitioner of applying cloud-native design patterns to event-driven microservice architectures. Exposure to high-scale distributed technology like Kafka, Mongo, Ignite, Redis Exposure to building microservices and APIs, ideally with REST, Kafka or gRPC Experience with optimization, algorithms or related quantitative processes. Experience with cloud platforms like Microsoft Azure, AWS, Google Cloud Experience with cloud deployment technology (Docker, Ansible, Terraform, etc.) is also a plus. Experience with DevOps and tools like Azure DevOps Experience with AI-related projects/products or experience working in an AI research environment. Exposure to Docker, Kubernetes, and cloud services is beneficial.
A degree, certifications, or open-source track record that shows you have a mastery of software engineering principles. Our Benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.
For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
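The "event-driven microservice architectures" called out under Nice To Have can be sketched in miniature. The snippet below is a stand-alone Python illustration (an in-memory bus standing in for a real broker such as Kafka); all topic and field names are invented for the example, not taken from any BlackRock system.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-memory stand-in for a message broker such as Kafka:
    producers publish events to named topics, and every subscriber
    registered on a topic receives those events in order."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Fan the event out to all handlers registered on this topic.
        for handler in self._subscribers[topic]:
            handler(event)

# Example: a downstream service reacting to trade events (illustrative names).
bus = EventBus()
seen: list[dict] = []
bus.subscribe("trades", seen.append)
bus.publish("trades", {"symbol": "XYZ", "qty": 100})
```

In a production system the bus would be the broker itself and each subscriber an independently deployed service; the decoupling shown here (publishers never reference subscribers) is the property that makes the pattern scale.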
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description

Skills Required:
- Bash/Shell scripting
- GitHub
- ETL
- Apache Spark
- Data validation strategies
- Docker & Kubernetes (for containerized deployments)
- Monitoring tools: Prometheus, Grafana
- Strong in Python
- Grafana/Prometheus, Power BI/Tableau (important)

Requirements
- Extensive hands-on experience implementing data migration and data processing
- Strong experience implementing ETL/ELT processes and building data pipelines, including workflow management, job scheduling and monitoring
- Experience building and implementing Big Data platforms on-prem or in the cloud, covering ingestion (batch and real-time), processing (batch and real-time), polyglot storage, and data access
- Good understanding of Data Warehouse, Data Governance, Data Security, Data Compliance, Data Quality, Metadata Management, Master Data Management, and Data Catalog
- Proven understanding and demonstrable implementation experience of big data platform technologies on the cloud (AWS and Azure), including surrounding services like IAM, SSO, cluster monitoring, Log Analytics, etc.
- Experience with source code management tools such as TFS or Git
- Knowledge of DevOps, with CI/CD pipeline setup and automation
- Building and integrating systems to meet business needs
- Defining features, phases, and solution requirements and providing specifications accordingly
- Experience building stream-processing systems using solutions such as Azure Event Hub, Kafka, etc.
- Strong experience with data modeling and schema design
- Strong knowledge of SQL and NoSQL databases and/or BI/DW
- Excellent interpersonal and teamwork skills
- Experience leading and mentoring other team members
- Good knowledge of Agile Scrum
- Good communication skills
- Strong analytical, logical and quantitative ability
- Takes ownership of a task; values accountability and responsibility
- Quick learner

Job responsibilities
ETL/ELT processes, data pipelines, Big Data platforms (on-prem/cloud), data ingestion (batch/real-time), data processing, polyglot storage, Data Governance, cloud (AWS/Azure), IAM, SSO, cluster monitoring, Log Analytics, source code management (Git/TFS), DevOps, CI/CD automation, stream processing (Kafka, Azure Event Hub), data modeling, schema design, SQL/NoSQL, BI/DW, Agile Scrum, team leadership, communication, analytical skills, ownership, quick learner

What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility.
With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
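The "data validation strategies" and ETL requirements in this posting can be illustrated with a small, self-contained Python sketch. It is a toy example under invented field names (id, amount, currency), not any GlobalLogic pipeline: a validate-and-load step that rejects bad rows and reports simple data-quality metrics, the same shape a Spark or Airflow task would take at scale.

```python
def validate_and_load(records: list[dict]) -> tuple[list[dict], dict]:
    """Toy ETL step: keep rows that have a non-empty id and a positive
    amount, normalise the currency code, and report data-quality counts."""
    loaded, rejected = [], 0
    for row in records:
        # Validation: required field present and amount in range.
        if not row.get("id") or row.get("amount", 0) <= 0:
            rejected += 1
            continue
        # Transformation: uppercase the currency, defaulting to INR.
        loaded.append({**row, "currency": row.get("currency", "INR").upper()})
    return loaded, {"loaded": len(loaded), "rejected": rejected}

rows = [
    {"id": 1, "amount": 10, "currency": "inr"},
    {"id": None, "amount": 5},      # missing id -> rejected
    {"id": 2, "amount": -3},        # non-positive amount -> rejected
]
clean, metrics = validate_and_load(rows)
```

Emitting the rejection counts alongside the clean batch is what lets a monitoring stack (Prometheus/Grafana, as listed above) alert on data-quality regressions rather than silently dropping rows.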
Posted 1 week ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
About This Role

About the role: You can work with us at one of the top FinTech companies. We sell our Aladdin platform to over 200 of the top global corporations, which in total manage about a quarter of all the world's money under management. BlackRock is a global but close-knit team of individuals who share a common goal of providing the very best possible level of support to our business partners and customers. From the top of the firm down, we embrace the diversity of values, identities and ideas brought by our employees. We are serious about our people and offer Flexible Time Off, collaborative working spaces and several other benefits. The individual selected for this position will be responsible for covering business-critical compute workloads, real-time/interactive processing, data transfer services, application and new-technology onboarding and upgrades, and recovery procedures. The international team is split into 4 global regions to provide 24x7x365 support. Additional responsibilities may include developing more cost-effective and predictable methods for supporting a growing technology infrastructure, working with internal development groups to manage application changes as they are released to production environments, onboarding new technologies, assisting in proof-of-concept build-outs, and disaster recovery testing and planning. If any of this excites you, we want to talk to you.

Team Overview
The Service Management Operations Group is responsible for monitoring, supporting, and administering production environments for all BlackRock businesses (including subsidiaries and BlackRock Solutions clients), acting as a first responder for troubleshooting, problem resolution, and escalation.
Collaborating with skilled professionals across the globe and managing a broad range of technologies and applications, the Operations Group delivers service quality and excellence through teamwork, innovating operational processes, and being part of the One BlackRock culture.

Role Responsibility
- Take complete ownership of ensuring that changes are fully completed and any affected services restored
- Identify process improvements for change implementation and weekend checkouts; aid in incident management and root cause analysis
- Provide ongoing operational support for the Aladdin infrastructure
- Support and fix both batch processing and interactive user applications to ensure the high availability of the Aladdin environment
- Use various tools to conduct analysis on system performance, root-cause diagnostics, and systems'/applications' design to understand and improve the operating quality of production environments
- Engage in clear and concise communications, both verbally and in writing; interact effectively on incident bridges and calls to ensure all distributed team members are kept informed
- Engineer solutions to expedite recovery of the environment after weekend maintenance
- Weekend shift work: you might be required to work weekend shifts on a rotational basis

Qualifications
- 2-3 years of experience with a four-year degree specializing in Computer Science, MIS, Mathematics, Physics, or Engineering
- 1+ years of experience as a DevOps engineer, or a strong interest in the role
- Good understanding of Linux administration fundamentals; must be familiar with typical administrative commands. Prior system administration experience is highly desirable
- Programming experience in at least one of Java, Python or Perl, or shell scripting experience
- A strong interest in, and aptitude for, quickly learning new technologies and proprietary systems
- A positive demeanor and the ability to work as a teammate in a fast-paced environment
- Build opportunities to integrate and automate operational processes, procedures, and tooling
- Experience working with cloud-native platforms, e.g., Azure, AWS, GCP

Pluses: Prior experience with any of these technologies: Ansible, Chef, Jenkins, AWX, ServiceNow, Cutover, Autosys, Kafka, Kubernetes.

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.
This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
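The monitoring and alerting work this operations role describes (watching production health, escalating on degradation) can be sketched with a tiny sliding-window error-rate check. This is an illustrative Python toy under invented thresholds, not any Aladdin tooling:

```python
from collections import deque

class ErrorRateMonitor:
    """Fire an alert when the failure rate over the last `window`
    health checks exceeds `threshold`; the windowed rate avoids
    paging on a single transient failure."""

    def __init__(self, window: int = 5, threshold: float = 0.5) -> None:
        self.samples: deque[bool] = deque(maxlen=window)
        self.threshold = threshold

    def record(self, ok: bool) -> bool:
        """Record one health-check result; return True if an alert should fire."""
        self.samples.append(ok)
        failures = sum(1 for s in self.samples if not s)
        # Only alert once the window is full, so startup noise is ignored.
        return (len(self.samples) == self.samples.maxlen
                and failures / len(self.samples) > self.threshold)

mon = ErrorRateMonitor(window=4, threshold=0.5)
results = [mon.record(ok) for ok in (True, False, False, False)]
```

Real deployments would compute the same signal in Prometheus or a similar stack, but the design choice is identical: alert on a rate over a window, not on individual failures.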
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Vasai Virar, Maharashtra, India
On-site
What is Contentstack?
Contentstack is on a mission to deliver the world's best digital experiences through a fusion of cutting-edge content management, customer data, personalization and AI technology. Iconic brands, such as AirFrance KLM, ASICS, Burberry, Mattel, Mitsubishi and Walmart, depend on the platform to rise above the noise in today's crowded digital markets and gain their competitive edge. Contentstack and its employees are dedicated to the customers and communities they serve. The company is recognized for its unmatched customer care and tradition of giving back globally through the Contentstack Cares program, including proud support of Pledge 1% and Girls Who Code. Learn more at www.contentstack.com.

Who Are We?
At Contentstack we are more than colleagues, we are a tribe. Our vision is to pursue equity among our communities, employees, partners, and customers. We are globally diverse yet close; distributed yet connected. We are dreamers and dream makers who challenge the status quo. We do the right thing, even when no one is watching. We are curious trendspotters and brave trendsetters. Our mission is to make Contentstack indispensable for organizations to tell their stories and to connect with the people they care about through inspiring, modern experiences. We care deeply about our customers and the communities we serve. #OneTeamOneDream. Chalo, let's go!

What Are We Looking For?
Contentstack is looking for a Fullstack Engineer - ReactJS (Frontend) / NodeJS (Backend) who can work on our Editorial Experience.
Roles & Responsibilities:
- Work across the stack, from a code commit to running it in production, with the end goal of delivering the best possible experience for the user
- Design, develop and test features from inception to rollout
- Write high-quality code that is scalable, testable, maintainable and reliable
- Independently own and drive new features from scratch
- Work in an Agile environment and facilitate agile practices
- Champion best practices and cross-functional skill development

Required skill sets:
- 2-3 years of product and application development experience
- Experience working with ReactJS on the frontend and NodeJS on the backend
- Working experience with NoSQL databases like MongoDB, DynamoDB or Redis, or with PostgreSQL
- Good experience and understanding of working with microservice-based architecture
- Good knowledge of AWS, Kubernetes, Kafka, GraphQL, gRPC, etc. is preferred
- Experience with frameworks like ExpressJS, NestJS, Redux, Redux Saga, Storybook, etc. is preferred
- Past experience tackling scaling issues is preferred
- Experience practicing Agile software development methods is preferred
- Flexible and curious in adapting to new technologies and trends

Experience: 2-3 years
Location: Vasai-Virar
Skills: ReactJS, NodeJS, NoSQL (MongoDB or Redis)

What Do We Offer?
Interesting Work | We hire curious trendspotters and brave trendsetters. This is NOT your boring, routine, cushy, rest-and-vest corporate job. This is the "challenge yourself" role where you learn something new every day, never stop growing, and have fun while you're doing it.

Tribe Vibe | We are more than colleagues, we are a tribe. We have a strict "no a**hole policy" and enforce it diligently. This means we spend time together - with spontaneous office happy hours, organized outings, and community volunteer opportunities. We are a diverse and distributed team, but we like to stay connected.

Bragging Rights | We are dreamers and dream makers.
Our efforts pay off and we work with the most prestigious brands, from big-name retailers to airlines, to professional sports teams. Your contribution will make an impact with many of the most recognizable names in almost every industry including AirFrance KLM, ASICS, Burberry, Mattel, Mitsubishi, Walmart, and many more! One Team One Dream | This is one of our values, and it shows. We don't believe in artificial hierarchies. If you're part of the tribe, you have an opportunity to contribute. Your voice will be heard and you will also receive regular updates about the business and its performance. Which, btw, is through the roof, so it's a great time to be joining… To review our Privacy Policy, please click here.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join our Team

About this opportunity: Ericsson invites applications for the role of DevOps Engineer. In this challenging and fulfilling position, you will be responsible for performing the detailed design of application and technical architecture components and classes according to the specification provided by the System Architect. The role also involves coding software components and contributing to the early testing phases, as well as extending your support to system testing.

Responsibilities include:
- Design and build automated pipelines for media ingest, processing and distribution
- Implement and maintain CI/CD workflows tailored for media-centric applications and services
- Architect and manage scalable cloud infrastructure (AWS) for high-availability media pipelines
- Work on ways to automate and improve development and release processes
- Implement robust monitoring/logging to ensure system reliability and performance
- Implement DevSecOps best practices in pipeline management and cloud access control, ensuring that systems are safe and secure against cybersecurity threats
- Work cross-functionally with software engineers, broadcast teams and operations to align on technical requirements
- Mentor junior engineers and help establish best practices for DevOps in media environments
- Test and examine code written by others and analyze results
- Develop internal tools and scripts (Java, Python, Bash, Node.js) and use CloudFormation or similar tools to streamline media integration tasks and infrastructure as code (IaC)
- Assist with or perform software upgrades/migrations in the project
- Maintain comprehensive documentation of pipelines, architectures and integration touchpoints in Confluence
- Provide reports and analysis on cost optimization and system performance (optional)
- Provide training sessions and documentation to operations and support teams for new solutions
- Identify areas of improvement in existing workflows and contribute to strategic enhancements
- Plan, implement, and manage changes, adhering to established change control processes
- Stay updated with industry trends and emerging technologies to improve solution design and delivery

Technical Requirements

Must have:
- Strong AWS services knowledge (EC2, S3, Lambda, RDS, etc.)
- Expertise in CI/CD pipelines (Jenkins, Sonar, Git, etc.)
- Proficiency in container technologies, with a focus on Kubernetes
- Experience with serverless, Kafka, Elasticsearch
- Strong programming skills in Python or scripting languages
- Experience with monitoring and logging tools (CloudWatch, ELK)

Supportive:
- Hands-on experience with database administration and tuning, e.g., graph databases, DynamoDB

Good to have:
- Understanding of IP networking and common protocols such as FTP and SFTP
- Knowledge of broadcast video formats, protocols, and encoding standards

Core Competencies:
- Agile ways of working
- Good communication skills; proficiency in English
- Flexibility to work in different time zones
- Fast learner and good team player
- A positive approach to change, the ability to understand other cultures, and the ability to adapt to, benefit from and respect cultural differences

Qualification and Experience:
- 5-9 years of relevant experience in the IT industry
- Bachelor's degree in computer engineering/information technology or equivalent

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do.
We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more.

Primary country and city: India (IN) || Chennai
Req ID: 770407
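The automated "ingest, process, distribute" media pipelines this role describes can be modelled as an ordered chain of stages. The sketch below is a deliberately minimal Python illustration with invented stage names; a real Ericsson pipeline would run these stages as CI/CD jobs or cloud services rather than in-process lambdas:

```python
from typing import Callable

def run_pipeline(asset: dict, stages: list[tuple[str, Callable[[dict], dict]]]):
    """Run an asset through ordered pipeline stages, collecting a step log
    (the log is what a monitoring/alerting layer would consume)."""
    log = []
    for name, stage in stages:
        asset = stage(asset)   # each stage returns the enriched asset
        log.append(name)
    return asset, log

# Illustrative stages mirroring an ingest -> transcode -> distribute flow.
stages = [
    ("ingest", lambda a: {**a, "stored": True}),
    ("transcode", lambda a: {**a, "format": "h264"}),
    ("distribute", lambda a: {**a, "published": True}),
]
asset, log = run_pipeline({"id": "clip-42"}, stages)
```

Keeping each stage a pure asset-in/asset-out function is what makes such a pipeline easy to test, reorder, and rerun from a failed step, which is the core of the automation the posting asks for.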
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities

Leadership & Mentoring
- Lead a team of Java developers, providing guidance, mentorship, and technical expertise
- Facilitate effective communication across teams and stakeholders, ensuring alignment on project goals
- Conduct code reviews, ensuring high-quality standards, and provide constructive feedback
- Collaborate with Product Managers, Architects, and other stakeholders to define technical requirements

Design & Architecture
- Design and implement scalable, maintainable, and high-performance Java applications
- Define and maintain application architecture, ensuring consistency and scalability
- Lead architectural discussions and decisions, ensuring solutions meet business requirements and technical specifications

Development & Coding
- Write clean, efficient, and reusable Java code using best practices
- Ensure that solutions adhere to coding standards and follow industry best practices for performance, security, and scalability
- Develop RESTful APIs and integrate third-party services and applications
- Leverage Java frameworks and tools such as Spring, Hibernate, and Maven to build applications

Continuous Improvement
- Drive continuous improvement in development processes, tools, and methodologies
- Keep up to date with new technologies, frameworks, and tools in the Java ecosystem and evaluate their potential benefits
- Promote DevOps practices and help implement automated testing and CI/CD pipelines

Problem Solving & Troubleshooting
- Analyze and troubleshoot issues in production environments
- Optimize existing systems and resolve performance bottlenecks
- Ensure that solutions are designed with reliability, maintainability, and extensibility in mind

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent work experience)
- 8+ years of experience in software development with a strong focus on Java and related technologies
- Proven experience as a Tech Lead, Senior Developer, or Software Engineer in Java-based application development
- Expertise in Java frameworks like Spring, Hibernate, and Spring Boot
- Experience with microservices architecture and cloud platforms
- Strong experience with Kafka, RabbitMQ, and Postgres
- Strong knowledge of RESTful APIs, databases (SQL/NoSQL), and caching technologies (Redis, Memcached)
- Familiarity with tools such as Maven, Git, Docker, and Kubernetes
- Experience with Agile development methodologies (Scrum/Kanban)
- Strong analytical and problem-solving skills, with a passion for delivering high-quality software solutions
- Excellent communication and leadership skills, with the ability to mentor and collaborate with cross-functional teams

Skills: Maven, SQL, Redis, RESTful APIs, AWS, Git, leadership, Elasticsearch, Spring Boot, RabbitMQ, microservices, problem-solving, cloud platforms, Postgres, Kafka, Memcached, Docker, DevOps, Kubernetes, Hibernate, Java, Agile methodologies, Spring, SQL/NoSQL databases, NoSQL, mentoring
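One practical consequence of the Kafka/RabbitMQ experience this posting asks for: at-least-once brokers can redeliver a message, so consumers must be idempotent. The sketch below illustrates the pattern in Python for brevity (the role itself is Java); message IDs and fields are invented for the example:

```python
class IdempotentConsumer:
    """At-least-once brokers such as Kafka may redeliver messages;
    tracking processed message IDs keeps side effects effectively
    exactly-once from the application's point of view."""

    def __init__(self) -> None:
        self.processed: set[str] = set()
        self.applied: list[dict] = []

    def handle(self, msg: dict) -> bool:
        if msg["id"] in self.processed:
            return False              # duplicate delivery: skip side effects
        self.processed.add(msg["id"])
        self.applied.append(msg)      # stand-in for the real side effect
        return True

c = IdempotentConsumer()
first = c.handle({"id": "evt-1", "op": "credit"})
dup = c.handle({"id": "evt-1", "op": "credit"})
```

In production the processed-ID set would live in a durable store (e.g. the Postgres or Redis instances listed above) so idempotency survives consumer restarts.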
Posted 1 week ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JR0125187 Manager, Solution Engineering – Hyderabad, India

Are you ready to join a global organization that helps diverse teams stay at the forefront of technology and innovation? How about offering up your skills in a global business that is committed to moving money for better? Join Western Union as Manager, Solution Engineering. Western Union powers your pursuit.

As a Manager, you will manage our cross-channel platform engineering team and contribute to the development of new APIs for an enterprise-level initiative in the Compliance orchestration platform. This role is important for expanding our existing product capabilities, improving customer experience, and accelerating the launch of new products and services. You will participate in designing and building scalable, high-performance APIs that drive innovation and efficiency across our KYC and Compliance ecosystem.

Role Responsibilities

Planning & Delivery:
- Lead the planning and execution of program phases, ensuring alignment with strategic objectives and timelines
- Partner with functional leaders to define and prioritize program deliverables, ensuring a focus on delivering measurable business value
- Oversee the development and tracking of transition plans
- Develop and deliver clear, concise, and timely program status updates to all stakeholders, including executive-level reports
- Identify and address communication gaps, proactively manage issues, and provide support to teams navigating conflicting priorities
- Provide expert advice, coaching, and mentorship to leads and team members; mentor developers while fostering a collaborative team environment
- Collaborate with stakeholders across Product and Technology to define and deliver technical solutions
- Stay hands-on, driving architecture simplification and consolidation of platforms into flexible, scalable, and compliant solutions
Role Requirements
- Strong experience managing teams, with a focus on API development and microservices architecture implementations
- 12+ years of progressive experience in program management, with a proven track record of leading large-scale, complex transformation programs
- Strong background in Java, Spring Boot, microservices, REST APIs, Spring Batch, Core Java, and Kafka; event-driven architecture experience
- Strong knowledge of AWS and experience developing cloud-based Java applications in AWS
- Strong hands-on experience with Kubernetes for container orchestration, including cluster management and application deployment
- Proven ability to lead and motivate cross-functional, global teams in a matrixed environment
- Excellent communication, presentation, and stakeholder management skills
- Experience working in agile, waterfall, and hybrid project management environments
- Proficiency in project management tools such as Jira, Jira Align, and Confluence
- PMP, Agile, SAFe, or other relevant certifications are highly preferred
- Experience with onsite and offshore teams
- Ability to translate and analyze requirements, and to document and communicate a detailed solution approach using suitable tools, techniques, templates, and diagrams
- Experience with large-scale workforce transformation
- Strong troubleshooting, problem-solving, and diagnostic skills
- Experience managing KYC, Compliance, and vendor integration in a banking/payments environment is a plus
- Strong communication skills, with the ability to interact with internal and external partners globally

We make financial services accessible to humans everywhere. Join us for what's next. Western Union is positioned to become the world's most accessible financial services company, transforming lives and communities. We're a diverse and passionate customer-centric team of over 8,000 employees serving 200 countries and territories, reaching customers and receivers around the globe.
More than moving money, we design easy-to-use products and services for our digital and physical financial ecosystem that help our customers move forward. Just as we help our global customers prosper, we support our employees in achieving their professional aspirations. You’ll have plenty of opportunities to learn new skills and build a career, as well as receive a great compensation package. If you’re ready to help drive the future of financial services, it’s time for Western Union. Learn more about our purpose and people at https://careers.westernunion.com/. Benefits You will also have access to short-term incentives, multiple health insurance options, accident and life insurance, and access to best-in-class development platforms, to name a few (https://careers.westernunion.com/global-benefits/). Please see the location-specific benefits below and note that your Recruiter may share additional role-specific benefits during your interview process or in an offer of employment. Your India-specific Benefits Include Employees Provident Fund [EPF] Gratuity Payment Public holidays Annual Leave, Sick leave, Compensatory leave, and Maternity / Paternity leave Annual Health Checkup Hospitalization Insurance Coverage (Mediclaim) Group Life Insurance, Group Personal Accident Insurance Coverage, Business Travel Insurance Relocation Benefit Western Union values in-person collaboration, learning, and ideation whenever possible. We believe this creates value through common ways of working and supports the execution of enterprise objectives which will ultimately help us achieve our strategic goals. By connecting face-to-face, we are better able to learn from our peers, problem-solve together, and innovate. Our Hybrid Work Model categorizes each role into one of three categories. Western Union has determined the category of this role to be Hybrid. 
This is defined as a flexible working arrangement that enables employees to divide their time between working from home and working from an office location. The expectation is to work from the office a minimum of three days a week. We are passionate about diversity. Our commitment is to provide an inclusive culture that celebrates the unique backgrounds and perspectives of our global teams while reflecting the communities we serve. We do not discriminate based on race, color, national origin, religion, political affiliation, sex (including pregnancy), sexual orientation, gender identity, age, disability, marital status, or veteran status. The company will provide accommodation to applicants, including those with disabilities, during the recruitment process, following applicable laws. Estimated Job Posting End Date 08-08-2025 This application window is a good-faith estimate of the time that this posting will remain open. This posting will be promptly updated if the deadline is extended or the role is filled.
Posted 1 week ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a Senior Manager to lead our Kubernetes and VM challenges in the management plane team. The management plane is a web-based application crafted to give our storage customers the capabilities to manage and monitor our distributed storage infrastructure. Our team is continually dedicated to acquiring and implementing groundbreaking technologies to overcome obstacles and innovate solutions that enhance our ability to manage large clusters of machines efficiently.

What You Will Be Doing
- Manage a team of senior developers
- Design, develop and maintain Kubernetes operators and our Container Storage Interface (CSI) plugin
- Develop a web-based solution that manages, operates and monitors our distributed storage
- Work closely with other teams to define and implement new APIs

What We Need To See
- B.Sc., M.Sc. or Ph.D. in Computer Science or a related field, or equivalent experience
- 12+ years of experience in web development (both client and server) and 3+ years of experience in people management
- Proven experience with Kubernetes (K8s), including developing or maintaining operators and/or CSI plugins
- Experience scripting with Python, Bash or similar
- At least 5 years of experience working in a Linux OS environment

Ways To Stand Out From The Crowd
- NodeJS for the server side: dominant modules are async & express
- Kafka, MongoDB, K8s
- Kata Containers, KubeVirt
- JavaScript frameworks: React, jQuery, c3js
- Scripting: Python and Bash, as well as Git and Linux

NVIDIA has continuously reinvented itself over two decades. NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. This is our life's work — to amplify human imagination and intelligence.
With highly competitive salaries and a comprehensive benefits package, NVIDIA is widely considered one of the technology world’s most desirable employers. We have some of the most brilliant and talented people in the world working for us and, due to extraordinary growth, our elite engineering teams are growing fast. If you're a creative and autonomous manager with a genuine passion for technology, we want to hear from you. JR1999682
Posted 1 week ago
3.0 years
0 Lacs
India
Remote
Location: Dallas, TX (Remote)
Job Type: Full-time Contract
Rate: USD $30/hour

Job Summary:
We are seeking an experienced IBM App Connect Enterprise (ACE) Developer to join our integration team at a Fortune 50 client. The ideal candidate will be responsible for designing, developing, and maintaining integration solutions using IBM ACE (formerly IBM Integration Bus). This role requires deep knowledge of integration patterns, APIs, and enterprise messaging systems, as well as hands-on experience in building scalable and secure integration services.

Key Responsibilities:
- Design, develop, test, and deploy integration flows using IBM App Connect Enterprise (v11 or v12)
- Implement message flows, sub-flows, ESQL transformations, and REST/SOAP web services
- Integrate with backend systems such as SAP, Salesforce, and databases using protocols like HTTP, MQ, JDBC, and FTP
- Develop and manage message models using DFDL, XML, JSON, and XSD
- Build reusable assets, templates, and patterns to accelerate integration delivery
- Collaborate with architecture, security, and DevOps teams to ensure solutions meet enterprise standards
- Troubleshoot and resolve issues related to performance, message delivery, and data integrity
- Document technical designs, integration logic, and deployment procedures

Required Skills & Qualifications:
- 3+ years of experience developing with IBM App Connect Enterprise (ACE) / IBM Integration Bus (IIB)
- Strong proficiency in ESQL, Java, and integration design patterns
- Experience with IBM MQ, Kafka, or other messaging systems
- Solid understanding of RESTful and SOAP services, including OpenAPI/Swagger
- Hands-on experience with DFDL, XML, JSON, XSLT
- Experience working in CI/CD environments with tools such as Git, Jenkins, and UrbanCode Deploy
- Familiarity with containerization (Docker, Kubernetes) and cloud deployments (Azure, AWS, GCP)
- Strong problem-solving and debugging skills

Preferred Qualifications:
- IBM Certified Developer – App Connect Enterprise certification
- Experience with hybrid cloud integrations
- Knowledge of event-driven architecture and API gateways
- Understanding of data security and encryption practices in integration
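For context on the transformation work this role involves: an ACE Compute node typically maps an inbound message tree to an outbound one, renaming fields and computing derived values. The sketch below illustrates that kind of mapping in plain Python rather than ESQL; the payload shape and field names (`cust_name`, `items`, etc.) are hypothetical, not from any real integration.

```python
import json

# Illustrative stand-in for the mapping an ESQL compute node performs
# in a message flow: reshape an inbound JSON order into the canonical
# structure a backend expects, computing a derived total along the way.

def transform_order(raw: str) -> str:
    src = json.loads(raw)
    out = {
        "orderId": src["id"],
        "customer": {"name": src["cust_name"].title()},
        # Derived during mapping, as ESQL transformations often do.
        "total": round(sum(i["qty"] * i["price"] for i in src["items"]), 2),
    }
    return json.dumps(out)
```

In ACE itself this would be a `SET OutputRoot.JSON.Data...` block in ESQL; the point is the same declarative source-tree-to-target-tree mapping.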
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Precisely is the leader in data integrity. We empower businesses to make more confident decisions based on trusted data through a unique combination of software, data enrichment products and strategic services. What does this mean to you? For starters, it means joining a company focused on delivering outstanding innovation and support that helps customers increase revenue, lower costs and reduce risk. In fact, Precisely powers better decisions for more than 12,000 global organizations, including 93 of the Fortune 100. Precisely's 2,500 employees are unified by four company core values that are central to who we are and how we operate: Openness, Determination, Individuality, and Collaboration. We are committed to career development for our employees and offer opportunities for growth, learning and building community. With a "work from anywhere" culture, we celebrate diversity in a distributed environment, with a presence in 30 countries and 20 offices across five continents. Learn more about why it's an exciting time to join Precisely!

Overview
As a Principal Software Engineer, you will be part of the team that designs and develops cloud applications in the data integrity domain. You will be deeply involved in designing, developing, and unit testing applications in our next-generation Data Integrity Suite platform, which is based on Kubernetes. You will work closely with software engineers, data scientists, and product managers to develop and deploy data-driven solutions that deliver business value. You will contribute to best practices, standards, and the technical roadmap.

What You Will Do
- Lead and contribute to end-to-end product development, drawing on 5 to 7+ years of experience designing and building scalable, modern cloud-based applications.
- Take full technical ownership of product features, from design to deployment, ensuring high-quality deliverables.
- Be responsible for unit-level design, implementation, unit and integration testing, and overall adherence to SDLC best practices.
- Apply experience with microservices architecture and containerization (Docker/Kubernetes).
- Drive and participate in technical design discussions and architecture reviews, ensuring robust, scalable, and maintainable solutions.
- Collaborate effectively with cross-functional teams, including product managers, architects, DevOps, QA, and other engineering teams.
- Participate in and enforce peer code reviews, ensuring best practices and continuous improvement in code quality and maintainability.
- Continuously evaluate and adopt emerging technologies and frameworks to enhance system architecture and team productivity.
- Embrace an Agile development environment, participate in sprints, and adapt to change as needed.
- Bring hands-on experience with technologies such as MongoDB, Kafka, and other modern distributed system components.
- Demonstrate strong communication skills and the ability to work in a global team environment.
- Bring familiarity with monitoring and observability tools, or performance tuning.

What We Are Looking For
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

Experience:
- 10+ years of experience developing enterprise-grade software.
- Demonstrated ability to technically lead product features through the full SDLC — design, development, testing, and deployment.
- Experience delivering multi-tenant SaaS solutions and working in Agile development environments.
- Up to 3 years of hands-on experience with cloud stack solutions (AWS, Azure, or GCP preferred).

Technical Skills:
- Strong object-oriented programming (OOP) fundamentals with in-depth knowledge of Java and Spring Boot.
- Solid understanding of design patterns and architectural patterns, with proven ability to apply them effectively.
- Experience with Kafka or another messaging system (RabbitMQ, etc.); Kafka preferred.
- Experience with RESTful APIs and building scalable, modern web applications.
- Proficiency with databases: SQL, MySQL, MongoDB; Redis is a plus.
- Experience with CI/CD tools and processes (e.g., Jenkins, Git, Artifactory, JIRA).
- Familiarity with Git, TDD (test-driven development), and Linux shell commands.

Cloud & DevOps:
- Exposure to cloud-native technologies such as Docker, Kubernetes, and microservices architecture.
- Hands-on experience with, or an understanding of, AWS, Azure, or GCP cloud platforms is an added advantage.

Soft Skills:
- Strong problem-solving and debugging skills.
- Excellent interpersonal and communication skills.
- Ability to collaborate with diverse, distributed, cross-functional teams.

The personal data that you provide as part of this job application will be handled in accordance with relevant laws. For more information about how Precisely handles the personal data of job applicants, please see the Precisely Global Applicant and Candidate Privacy Notice.
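One pattern worth knowing for the Kafka-centric distributed-systems work described above is consumer-side retry with exponential backoff before routing a message to a dead-letter path. The sketch below is a minimal, framework-free illustration in Python; the handler, delays, and injectable `sleep` are illustrative assumptions, not Precisely's implementation.

```python
import time

# Hedged sketch of retry-with-backoff around message processing, a
# pattern commonly paired with Kafka/RabbitMQ consumers. `sleep` is
# injectable so the backoff can be tested without real waiting.

def process_with_retry(handler, message, max_attempts=4,
                       base_delay=0.01, sleep=time.sleep):
    """Call `handler(message)`, retrying transient failures with backoff."""
    for attempt in range(max_attempts):
        try:
            return handler(message)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # exhausted: let the message go to a dead-letter path
            sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
```

In production this sits inside the consumer poll loop, with the final `raise` replaced by a publish to a dead-letter topic so the partition is not blocked.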
Posted 1 week ago