
10828 Apache Jobs - Page 26

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

2 - 3 Lacs

Chandigarh

On-site

Preferred candidate profile:
We are looking for a Linux Administrator with at least 3 years of experience who will be responsible for installing and configuring web servers and database servers. The ideal candidate should know how to deploy websites; will be responsible for designing, implementing, and monitoring the infrastructure; and should be familiar with Docker and CI/CD.

1. In-depth knowledge of Linux: RedHat, CentOS, Debian, etc.
2. Solid knowledge of installing and configuring a web server (Nginx or Apache) and a database server (MySQL, PostgreSQL, MongoDB).
3. Knowledge of cloud services such as AWS, Azure, and DigitalOcean.
4. Knowledge of networking: switches, routers, firewalls.
5. Knowledge of Docker, CI/CD, and Terraform.
6. Knowledge of deploying websites written in different languages (PHP, NodeJS, Python) on production servers.
7. Experience deploying web servers and PHP, Node, and Python applications.

Job Type: Full-time
Pay: ₹20,000.00 - ₹30,000.00 per month
Benefits: Food provided, Health insurance, Life insurance, Paid sick time, Paid time off
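The deployment duties here usually amount to putting a small application behind Nginx or Apache. As a hedged illustration (not from the posting), a minimal Python WSGI app like the one below is the sort of thing such an admin would serve via gunicorn or mod_wsgi; the /health route is an assumed convention for load-balancer probes.

```python
# Minimal WSGI app an admin might deploy behind Nginx/Apache.
# The "/health" endpoint is a hypothetical convention for monitoring probes.
def health_app(environ, start_response):
    """Return 200 on /health so a load balancer or monitor can probe it."""
    if environ.get("PATH_INFO") == "/health":
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok\n"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found\n"]
```

Behind Nginx, this would typically sit on a gunicorn socket reached via `proxy_pass`; behind Apache, via mod_wsgi or mod_proxy.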

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Thiruvananthapuram

On-site

We are hiring!
Position: Java Developers
Experience: 5 to 8 years
Mode: Hybrid
Location: Technopark, Trivandrum

Requirements:
- Strong programming skills in Java (Java 8 or higher).
- Proficiency in Spring Boot, REST API development, and microservices architecture.
- Experience with Apache Kafka for messaging and data streaming.
- Solid understanding of relational databases, especially PostgreSQL.
- Familiarity with test-driven development and test automation frameworks.

Interested candidates, share your resume at drisyasreekumar.srishtis@gmail.com
Job Type: Full-time

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderābād

On-site

What will you be doing?
- Develop real-time streaming and batch data pipelines.
- Deliver high-quality data engineering components and services that are robust and scalable.
- Collaborate and communicate effectively with cross-functional teams to ensure delivery of strong results.
- Employ methodical approaches to data modeling, data quality, and data governance.
- Provide guidance on architecture, design, and quality engineering practices to the team.
- Leverage foundational data infrastructure to support analytics, BI, and visualization layers.
- Work closely with data scientists on feature engineering, model training frameworks, and model deployments at scale.

What are we looking for?
- BS/MS in Computer Science or a related field, or an equivalent combination of education and experience.
- A minimum of 6 years of experience in software engineering, with hands-on experience building data pipelines and working with big data technologies.
- Proficiency with big data technologies such as Apache Spark, Apache Iceberg, Amazon Redshift, Athena, EMR, and other AWS services (S3, Lambda).
- Expertise in at least one programming language: Python, Java, or Scala.
- Extensive experience designing and building data models, integrating data from various sources, building ETL/ELT and data-flow pipelines, and supporting all parts of the data platform.
- Expert-level SQL programming knowledge and experience.
- Experience with enterprise reporting and/or data visualization tools such as Strategy, Cognos, Tableau, Looker, Power BI, Superset, or QlikView.
- Strong data analysis skills, capable of making data-driven arguments and effective visualizations.
- Energetic, enthusiastic, and detail-oriented.

Bonus points:
- Experience in the e-commerce/retail domain.
- Knowledge of StarRocks.
- Knowledge of web services, API integration, and data exchanges with third parties.
- Familiarity with basic statistical analysis and machine learning concepts.
- A passion for producing high-quality analytics deliverables.
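As a toy illustration of the batch side of such pipelines (an assumed example, not from the posting; `csv` and SQLite stand in for Spark and Redshift), the recurring shape is: parse, apply a data-quality gate, load, aggregate.

```python
import csv
import io
import sqlite3

def run_batch_pipeline(raw_csv: str) -> list:
    """Tiny batch ETL sketch: parse CSV, drop malformed rows (a data-quality
    gate), load into SQLite, and return an aggregate per SKU."""
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_csv)):
        try:
            rows.append((rec["sku"], float(rec["amount"])))
        except (KeyError, ValueError):
            continue  # skip records that fail validation
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (sku TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    return con.execute(
        "SELECT sku, SUM(amount) FROM sales GROUP BY sku ORDER BY sku"
    ).fetchall()
```

In a real Spark/Redshift pipeline the same three stages appear as DataFrame reads, filter/validation transforms, and a grouped aggregation written back to the warehouse.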

Posted 1 week ago

Apply

5.0 years

1 - 9 Lacs

Hyderābād

On-site

Be an integral part of an agile team that's constantly pushing the envelope to enhance, build, and deliver top-notch technology products.

As a Senior Lead Software Engineer at JPMorgan Chase within the Consumer & Community Banking team, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. Drive significant business impact through your capabilities and contributions, and apply deep technical expertise and problem-solving methodologies to tackle a diverse array of challenges that span multiple technologies and applications.

Job responsibilities:
- Regularly provides technical guidance and direction to support the business and its technical teams, contractors, and vendors
- Develops secure and high-quality production code, and reviews and debugs code written by others
- Drives decisions that influence the product design, application functionality, and technical operations and processes
- Serves as a function-wide subject matter expert in one or more areas of focus
- Actively contributes to the engineering community as an advocate of firmwide frameworks, tools, and practices of the Software Development Life Cycle
- Influences peers and project decision-makers to consider the use and application of leading-edge technologies
- Adds to the team culture of diversity, opportunity, inclusion, and respect

Required qualifications, capabilities, and skills:
- Formal training or certification in software development concepts and 5+ years of applied, hands-on, end-to-end experience delivering complex solutions at scale, including testing and operational stability
- Experience in mainframe programming languages/components: COBOL, JCL, VSAM, DB2, IMS, CICS
- In addition, experience in one or more general-purpose stacks is expected: AWS Cloud, Core Java, Spring frameworks, REST APIs, microservices, and other web technologies
- Experience with the following development and build tools (or similar): IntelliJ/Eclipse, Maven, BitBucket/Git/Gitflow, JMeter/BlazeMeter, Spring Boot
- Experience with Unix shell scripts, API gateways, Apache Kafka, and NoSQL
- Experience in all aspects of the software development process, including requirements, design, coding, unit testing, quality assurance, and deployment
- Experience with agile software development methodologies such as Scrum, for quick turnaround time
- Experience in Computer Science, Computer Engineering, Mathematics, or a related technical field

Posted 1 week ago

Apply

0 years

2 - 2 Lacs

Hyderābād

On-site

Role Summary & Role Description:
Technical Manager with specific Oracle and PL/SQL expertise to design, develop, and optimize data workflows on the Databricks platform. The ideal candidate will have deep expertise in Apache Spark, PySpark, Python, job orchestration, and CI/CD integration to support scalable data engineering and analytics solutions.

- Analyzes, designs, develops, and maintains software applications to support business units.
- Expected to spend 80% of the time on hands-on development, design, and architecture, and the remaining 20% on guiding the team on technology and removing other impediments.
- Capital markets project experience preferred.
- Provides advanced technical expertise in analyzing, designing, estimating, and developing software applications to the project schedule.
- Oversees systems design and implementation of the most complex design components.
- Creates project plans and deliverables and monitors task deadlines.
- Oversees, maintains, and supports existing software applications.
- Provides subject matter expertise in reviewing, analyzing, and resolving complex issues.
- Designs and executes end-to-end system tests of new installations and/or software prior to release to minimize failures and impact to business and end users.
- Responsible for resolution, communication, and escalation of critical technical issues.
- Prepares user and systems documentation as needed.
- Identifies and recommends industry best practices; serves as a mentor to junior staff.
- Acts as a technical lead/mentor for developers in day-to-day work and overall project areas; able to lead a team of agile developers.
- Has worked on complex, deadline-driven projects with minimal supervision.
- Able to architect, design, and develop from minimal requirements by effectively coordinating activities between business analysts, scrum leads, developers, and managers.
- Provides agile status notes on day-to-day project tasks.

Technical Skills:
- Design and implement robust ETL pipelines using Databricks notebooks and workflows.
- Proficiency in Python, Scala, Apache Spark, SQL, and Spark DataFrames.
- Experience with job orchestration tools and scheduling frameworks.
- Optimize Spark jobs for performance and cost-efficiency.
- Develop and manage job orchestration strategies using Databricks Jobs and Workflows.
- Familiarity with CI/CD practices and tools.
- Monitor and troubleshoot production jobs, ensuring reliability and data quality.
- Implement security and governance best practices, including access control and encryption.
- Strong practical experience with Scrum, agile modelling, and adaptive software development.
- Ability to understand and grasp the big picture of system components.
- Experience building environment, architecture, and design guides and architecture and application blueprints.
- Strong understanding of data modeling, warehousing, and performance tuning.
- Excellent problem-solving and communication skills.

Core/must-have skills: Oracle, SQL, PL/SQL, Python, Scala, Apache Spark, Spark Streaming, CI/CD pipelines, AWS cloud experience
Good-to-have skills: Airflow
Work Schedule: 12 PM to 9 PM IST

About State Street:
What we do. State Street is one of the largest custodian banks, asset managers, and asset intelligence companies in the world. From technology to product innovation, we're making our mark on the financial services industry. For more than two centuries, we've been helping our clients safeguard and steward the investments of millions of people. We provide investment servicing, data & analytics, investment research & trading, and investment management to institutional clients.

Work, Live and Grow. We make all efforts to create a great work environment. Our benefits packages are competitive and comprehensive. Details vary by location, but you may expect generous medical care, insurance, and savings plans, among other perks. You'll have access to flexible Work Programs to help you match your needs. And our wealth of development programs and educational support will help you reach your full potential.

Inclusion, Diversity and Social Responsibility. We truly believe our employees' diverse backgrounds, experiences, and perspectives are a powerful contributor to creating an inclusive environment where everyone can thrive and reach their maximum potential while adding value to both our organization and our clients. We warmly welcome candidates of diverse origin, background, ability, age, sexual orientation, gender identity, and personality. Another fundamental value at State Street is active engagement with our communities around the world, both as a partner and a leader. You will have tools to help balance your professional and personal life, paid volunteer days, matching gift programs, and access to employee networks that help you stay connected to what matters to you.

State Street is an equal opportunity and affirmative action employer. Discover more at StateStreet.com/careers
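One idea underlying the "job orchestration strategies using Databricks Jobs and Workflows" mentioned above is that tasks form a dependency DAG and must run in topological order. A minimal stdlib sketch (the task names are hypothetical, not from the posting):

```python
from graphlib import TopologicalSorter

def run_order(dag: dict) -> list:
    """Return one valid execution order for a task DAG, where each key maps
    to the set of tasks it depends on (its upstream predecessors)."""
    return list(TopologicalSorter(dag).static_order())

# Hypothetical four-stage pipeline: each task waits on its upstream step.
etl_dag = {
    "ingest": set(),
    "validate": {"ingest"},
    "transform": {"validate"},
    "publish": {"transform"},
}
```

Real orchestrators (Databricks Workflows, Airflow) add retries, scheduling, and parallel execution of independent branches on top of exactly this ordering guarantee.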

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderābād

Remote

Req ID: 335295

NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.

We are currently seeking a Snowflake Engineer - Digital Solution Consultant Sr. Analyst to join our team in Hyderabad, Telangana (IN-TG), India (IN).

- Experience with other cloud data warehousing solutions.
- Knowledge of big data technologies (e.g., Apache Spark, Hadoop).
- Experience with CI/CD pipelines and DevOps practices.
- Familiarity with data visualization tools.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees.

NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact-us form: https://us.nttdata.com/en/contact-us. NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications.

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

Posted 1 week ago

Apply

10.0 - 12.0 years

9 - 10 Lacs

Hyderābād

On-site

Overview:
PepsiCo Data BI & Integration Platforms is seeking an experienced, highly skilled professional to manage and optimize Apache and Oracle WebLogic server environments (on-premises and AWS/Azure cloud), ensuring high availability, performance, and security of PepsiCo's global enterprise applications. The ideal candidate will have extensive hands-on experience and deep expertise in Apache and Oracle WebLogic administration, troubleshooting, and advanced configuration, as well as deep hands-on experience with cloud Infrastructure as Code (IaC), cloud network design, cloud security principles, and cloud modernization and automation.

Responsibilities:
- Leadership and guidance: Manage and mentor a team of cloud platform infrastructure SMEs, providing technical leadership and direction.
- Modernization: Migration and modernization of Apache/WebLogic to Azure/AWS.
- Patching and upgrades.
- Troubleshooting and problem resolution: Identify and resolve system and application issues, including performance degradation, connectivity problems, and security breaches. Participate in project planning and change management, including root cause analysis for issues.
- On-call support: Provide on-call support for production environments.
- Documentation: Create and maintain documentation of configuration changes, system processes, and troubleshooting procedures.
- Collaboration: Work closely with development, operations, and other teams to support application lifecycle management and ensure smooth operation.
- High availability, business continuity, and disaster recovery: Configure and maintain high availability and disaster recovery solutions, including clustering, failover mechanisms, and testing.
- Apache/WebLogic installation and configuration:
  WebLogic – installation, configuration, and maintenance of WebLogic Server instances, including domains, clusters, and authentication providers; integrating WebLogic with other systems, such as web servers (Apache, etc.), messaging systems, and databases.
  Apache – installation, configuration, and maintenance of Apache web servers and Tomcat infrastructure.
- Application deployment:
  WebLogic – deploying and managing applications (including WAR, EAR, and JAR files) on WebLogic Server, ensuring proper configuration and integration.
  Apache – deploying and configuring web applications for serving static content and routing requests.
  Apache/WebLogic – performing capacity planning and forecasting for the application and web infrastructure.
- Performance tuning and optimization:
  WebLogic – optimizing the performance of WebLogic Server and applications through techniques like heap size configuration, thread dump analysis, and other performance tuning methods.
  Apache/WebLogic – monitoring server performance, identifying bottlenecks, and implementing optimizations to improve efficiency and responsiveness.
- Security administration:
  WebLogic – implementing and managing security configurations/realms, including SSL/TLS, user authentication, and access control (users, groups, roles, and policies).
  Apache – managing security and access controls for the Apache environment and implementing secure coding practices.
- Automation and scripting: Develop and implement scripts (e.g., WLST) to automate routine tasks and manage the WebLogic/Apache environment, including integration with Elastic, Splunk, and ServiceNow. Develop and implement automation strategies, including CI/CD pipelines, and analyze processes for improvements. Leverage the Oracle Web Management Pack for automation.
- Monitoring and alerting:
  WebLogic – monitor server health, performance metrics, and logs, tuning WebLogic configurations for optimal performance; utilize monitoring tools (e.g., Nagios, Zabbix) to track server health and performance and troubleshoot issues and outages.
  Apache – monitor the Apache environment to resolve issues and track website performance through analytics.
- Cloud infrastructure and automation:
  Implement cloud infrastructure policies, standards, and best practices, ensuring the cloud environment adheres to security and regulatory requirements.
  Design, deploy, and optimize cloud-based infrastructure using Azure/AWS services that meet the performance, availability, scalability, and reliability needs of our applications and services.
  Drive troubleshooting of cloud infrastructure issues, ensuring timely resolution and root cause analysis by partnering with the global cloud center of excellence, enterprise application teams, and PepsiCo premium cloud partners (Microsoft, AWS, Apache & Oracle).
  Establish and maintain effective communication and collaboration with internal and external stakeholders, including business leaders, developers, customers, and vendors.
  Develop Infrastructure as Code (IaC) to automate provisioning and management of cloud resources.
  Write and maintain scripts for automation and deployment using PowerShell, Python, or the Azure/AWS CLI.
  Work with stakeholders to document architectures, configurations, and best practices.
  Apply knowledge of cloud security principles around data protection, identity and access management (IAM), compliance and regulation, threat detection and prevention, disaster recovery, and business continuity.

Qualifications:
- A bachelor's degree in computer science or a related field, or equivalent experience.
- 10 to 12 years of experience in Apache/WebLogic server environments, including architecture, operations, and security, with at least 6 to 8 years of experience leading cloud migration/modernization.
- Extensive hands-on WebLogic experience: server architecture; deployment (deployment plans/descriptors); administration; Java and J2EE technologies; JMS and messaging bridges; relational databases (e.g., Oracle, Exadata); WebLogic Diagnostics Framework (WLDF) and Oracle Web Management Packs; MBeans and JMX; WLST and shell scripting; integration with cloud platforms (AWS, Azure); containerization using Docker and Kubernetes.
- Extensive hands-on Apache experience: web server administration, including IIS and Tomcat; configuring Apache to serve static content using Alias and Directory directives and caching; routing dynamic requests using URL Rewrite (simple redirects and complex URL manipulation) and Virtual Hosts; performance tuning of modules and operating system settings; CDN; integration with cloud platforms (AWS, Azure); containerization using Docker and Kubernetes.
- Extensive hands-on Windows and Linux administration skills.
- Extensive hands-on experience with web servers (e.g., Apache, Nginx) and security realm configuration, including LDAP and custom security providers.
- Extensive hands-on experience leading cloud migration and modernization, with experience in or understanding of AWS Elastic Beanstalk, Amazon EC2, ECS/EKS, Docker, AWS Application Migration Service, microservice refactoring, Azure WebLogic Server, Virtual Machines, and AKS.
- Oracle certification in WebLogic and Azure/AWS certification are preferred.
- Extensive hands-on experience implementing high availability and disaster recovery for Apache/WebLogic or other cloud platform technologies.
- Deep knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps.
- Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints and network security groups, firewalls, external/internal DNS, F5 load balancers, virtual networks, and subnets.
- Proficient in scripting and automation tools such as Bash, Perl, PowerShell, Python, Terraform, and Ansible.
- Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences.
- Strong self-organization, time management, and prioritization skills.
- An elevated level of attention to detail, excellent follow-through, and reliability.
- Strong collaboration, teamwork, and relationship-building skills across multiple levels and functions in the organization.
- Ability to listen and establish rapport and credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams.
- Strategic thinker focused on business-value results that utilize technical solutions.
- Strong communication skills in writing, speaking, and presenting.
- Capable of working effectively in a multi-tasking environment.
- Fluent in English.
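On the monitoring side of a role like this, a routine task is summarizing an Apache access log. The sketch below assumes common/combined log format and is illustrative only; the regex covers just the fields needed to count responses by HTTP status.

```python
import re
from collections import Counter

# Matches the request ("METHOD PATH PROTOCOL") followed by the 3-digit
# status code in Apache common/combined log format. Intentionally loose:
# a production parser would validate every field.
LOG_RE = re.compile(r'"\S+ \S+ \S+" (?P<status>\d{3}) ')

def status_counts(lines):
    """Count HTTP status codes across access-log lines; skip unparseable ones."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m:
            counts[m.group("status")] += 1
    return dict(counts)
```

A spike in 5xx counts from a script like this is the kind of signal that would feed the Nagios/Zabbix alerting mentioned above.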

Posted 1 week ago

Apply

15.0 years

25 - 35 Lacs

India

On-site

This is a full-time opportunity with a 100% onsite work mode. The candidate MUST be local to the Hyderabad area and available to join immediately or on short notice. In-person interview only.

We are seeking a highly experienced Java Architect with deep expertise in Event-Driven Architecture (EDA) and Apache Kafka to design and lead the implementation of scalable, high-performance distributed systems. You will play a key role in architecting solutions that handle high-throughput event streaming, ensuring system resiliency, scalability, and performance in real-time data environments.

Required Skills & Qualifications:
- 15+ years of professional experience in Java/J2EE development.
- 5+ years in architectural roles designing distributed, scalable systems.
- Deep understanding of Apache Kafka, including topic design, producer/consumer optimization, schema registry, and message durability.
- Experience with Spring Boot, Spring Cloud, or similar microservices frameworks.
- Strong knowledge of Event-Driven Architecture, including concepts like event sourcing, stream processing, and domain-driven design (DDD).
- Experience with Kafka Streams, Kafka Connect, and KSQL is a plus.
- Familiarity with other messaging systems like RabbitMQ, Pulsar, or AWS Kinesis is beneficial.
- Hands-on experience with containerization and orchestration tools (Docker, Kubernetes).
- Solid understanding of REST APIs and JSON.
- Strong communication and leadership skills; ability to mentor and guide teams in design and implementation.
- Problem-solving mindset with the ability to work in fast-paced environments.
- Strong stakeholder management and collaboration capabilities.

Job Type: Full-time
Pay: ₹2,500,000.00 - ₹3,500,000.00 per year
Work Location: In person
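The event sourcing concept named in the requirements can be sketched compactly: current state is never stored directly but rebuilt by folding the event stream, which is exactly what makes a Kafka topic usable as a system of record. The example below uses Python for brevity (the role itself is Java), and the account/balance domain is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """An immutable domain event, as it would appear on a Kafka topic."""
    kind: str      # "deposited" or "withdrawn" (hypothetical event types)
    amount: int

def replay(events):
    """Fold the event stream into the aggregate's current state (a balance).
    Replaying the same events always yields the same state."""
    balance = 0
    for e in events:
        if e.kind == "deposited":
            balance += e.amount
        elif e.kind == "withdrawn":
            balance -= e.amount
    return balance
```

In a Kafka-backed design, the same fold runs either on consumer restart (replaying the topic from offset zero) or continuously in a Kafka Streams aggregation.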

Posted 1 week ago

Apply

11.0 years

3 - 4 Lacs

Hyderābād

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

At Optum Insight, we are at the forefront of modernizing healthcare. We are seeking a visionary Application Architect to own the technical design and strategy for a pivotal component at the heart of our payer solutions business. This is a unique opportunity to modernize and build a mission-critical, event-driven system from the ground up. You will be responsible for designing a highly scalable, resilient, and performant service that processes real-time data streams for millions of members. If you are passionate about building robust distributed systems and want to make a tangible impact in healthcare, this role is for you.

Primary Responsibilities:
- Architectural ownership: Own and define the end-to-end technical architecture for the component/service, ensuring it aligns with our Domain-Driven Design (DDD) and event-driven principles
- System design: Design and document data models, API contracts (REST), and event schemas. Create and maintain architectural diagrams, including state machines, component diagrams, and sequence diagrams, to guide the development team
- Data-intensive application design: Architect a solution that efficiently consumes, processes, and evaluates high-volume data streams from Apache Kafka, originating from systems like our Common Data Intake service and other domain services
- Database strategy: Lead the design and implementation of our database strategy using MongoDB Atlas on Azure. This includes schema design, indexing strategies, and leveraging advanced features like the Aggregation Framework for real-time analytics and Atlas Search for fuzzy matching capabilities
- Technical leadership and mentorship: Guide and mentor a team of talented engineers on best practices for building scalable microservices using Java, Spring Boot, and Kafka. Provide hands-on guidance where needed
- Cross-functional collaboration: Work closely with other architects, product owners, and business stakeholders to translate complex business requirements into a robust, secure, and maintainable technical solution
- Ensure non-functional requirements: Design for scalability, high availability, data security (HIPAA compliance), and performance. Define SLOs/SLIs and ensure the system is instrumented for effective monitoring and alerting
- Legacy integration: Understand the existing data landscape, including data stores like Hive and processing jobs in Spark, to ensure seamless integration and migration paths
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- 11+ years of professional software engineering experience with a focus on building enterprise-scale backend systems
- 3+ years in a technical leadership or architect role with a proven track record of designing and delivering complex software projects
- Expert database design skills: solid hands-on experience with NoSQL document databases, particularly MongoDB, including schema design and performance tuning; solid foundation in relational databases (SQL) and understanding of when to use each
- Data processing knowledge: experience working within a modern data ecosystem that includes technologies like Apache Spark and Hive
- API design mastery: extensive experience designing, developing, and documenting secure, scalable REST APIs
- Microsoft Azure knowledge: experience working with MS Azure to provide cloud services for infrastructure, computing, and databases
- Event-driven architecture: proven experience designing and implementing solutions using Apache Kafka for high-throughput, real-time data streaming
- Deep expertise in the Java ecosystem: mastery of Java and the Spring Boot framework for building RESTful APIs and microservices

Preferred Qualifications:
- Search engine experience: hands-on experience with Elasticsearch or, even better, MongoDB Atlas Search for implementing complex search and fuzzy matching logic
- Domain-Driven Design (DDD): practical experience applying DDD principles (bounded contexts, aggregates, events) to build maintainable and business-aligned software
- Cloud platform expertise: experience architecting and deploying applications on a major cloud provider, with a strong preference for Microsoft Azure
- CI/CD and DevOps: experience working in a mature DevOps environment with CI/CD pipelines (e.g., Jenkins, Azure DevOps), containerization (Docker, Kubernetes), and Infrastructure as Code (Terraform)
- System observability: experience with monitoring and observability stacks such as Prometheus, Grafana, or Dynatrace
- Healthcare domain knowledge: familiarity with the healthcare industry, particularly related to risk adjustment (HCCs), quality measures (HEDIS), and PHI/HIPAA compliance
- Experience coordinating and managing distributed teams of varying size and skill set, and working with or as part of a matrixed organization
- Knowledge of Terraform
- Knowledge of Azure: Virtual Network (VNet), Subnets, VPN Gateway / ExpressRoute, Azure SQL Database / Azure Database for PostgreSQL/MySQL, Azure Virtual Machines (VMs), Azure Kubernetes Service (AKS) / Azure Container Instances (ACI), Azure Monitor / Log Analytics, Azure Blob Storage, Azure Functions, Azure Event Grid / Azure Notification Hubs, Azure Service Bus / Azure Queue Storage, Azure Data Box / Azure File Sync / SFTP on Azure Blob Storage
- Knowledge of programming languages: JavaScript, Java (Spring Boot), Node.js, Angular
- Solid understanding of and experience in delivering software using the agile SAFe framework
- Proven planning skills and the ability to work with limited supervision
- Proven solid verbal and written communication skills, including presentation skills
- Proven solid conflict resolution and negotiation skills
- Proven ability to work well in a matrixed cross-functional environment and a builder of professional relationships

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderābād

Remote

Software Engineer II Hyderabad, Telangana, India Date posted Jul 28, 2025 Job number 1851616 Work site Up to 50% work from home Travel 0-25 % Role type Individual Contributor Profession Software Engineering Discipline Software Engineering Employment type Full-Time Overview The Purview team is dedicated to protecting and governing the enterprise digital estate on a global scale. Our mission involves developing cloud solutions that offer premium features such as security, compliance, data governance, data loss prevention and insider risk management. These solutions are fully integrated across Office 365 services and clients, as well as Windows. We create global-scale services to transport, store, secure, and manage some of the most sensitive data on the planet, leveraging Azure, Exchange, and other cloud platforms, along with Office applications like Outlook. The IDC arm of our team is expanding significantly and seeks talented, highly motivated engineers. This is an excellent opportunity for those looking to build expertise in cloud distributed systems, security, and compliance. Our team will develop cloud solutions that meet the demands of a vast user base, utilizing state-of-the-art technologies to deliver comprehensive protection. Office 365, the industry leader in hosted productivity suites, is the fastest-growing business at Microsoft, with over 100 million seats hosted in multiple data centers worldwide. The Purview Engineering team provides leadership, direction, and accountability for application architecture, cloud design, infrastructure development, and end-to-end implementation. You will independently determine and develop architectural approaches and infrastructure solutions, conduct business reviews, and operate our production services. 
Strong collaboration skills are essential to work closely with other engineering teams, ensuring our services and systems are highly stable, performant, and meet the expectations of both internal and external customers and users. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Qualifications Qualifications - Required: Solid understanding of Object-Oriented Programming (OOP) and common Design Patterns. Minimum of 4+ years of software development experience, with proficiency in C#, Java, or Scala. Hands-on experience with cloud platforms such as Azure, AWS, or Google Cloud; experience with Azure Services is a plus. Familiarity with DevOps practices, CI/CD pipelines, and agile methodologies. Strong skills in distributed systems and data processing. Excellent communication and collaboration abilities, with the capacity to handle ambiguity and prioritize effectively. A BS or MS degree in Computer Science or Engineering, or equivalent work experience. Qualifications - Other Requirements: Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter. Responsibilities Build cloud-scale services that process and analyze massive volumes of organizational signals in real time. Harness the power of Apache Spark for high-performance data processing and scalable pipelines.
Apply machine learning to uncover subtle patterns and anomalies that signal insider threats. Craft intelligent user experiences using React and AI-driven insights to help security analysts act with confidence. Work with a modern tech stack and contribute to a product that’s mission-critical for some of the world’s largest organizations. Collaborate across disciplines—from data science to UX to cloud infrastructure—in a fast-paced, high-impact environment. Design and deliver end-to-end features including system architecture, coding, deployment, scalability, performance, and quality. Develop large-scale distributed software services and solutions that are modular, secure, reliable, diagnosable, and reusable. Conduct investigations and drive investments in complex technical areas to improve systems and services. Ensure engineering excellence by writing effective code, unit tests, debugging, code reviews, and building CI/CD pipelines. Troubleshoot and optimize Live Site operations, focusing on automation, reliability, and monitoring. Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work: industry leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect. Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances.
If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
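The insider-threat work described in this posting boils down to scoring a live stream of signals against a user's recent baseline. A minimal, self-contained sketch (pure Python stands in for the Spark pipeline; the window size, threshold, and event shape are illustrative assumptions, not Microsoft's actual method):

```python
from collections import deque
import math

def rolling_zscore_anomalies(events, window=20, threshold=3.0):
    """Flag (timestamp, value) events that deviate strongly from the
    trailing window's mean — a toy stand-in for streaming anomaly
    detection over organizational signals."""
    recent = deque(maxlen=window)
    anomalies = []
    for ts, value in events:
        if len(recent) >= 5:  # need a minimal baseline before scoring
            mean = sum(recent) / len(recent)
            var = sum((v - mean) ** 2 for v in recent) / len(recent)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > threshold:
                anomalies.append((ts, value))
        recent.append(value)
    return anomalies

# A user suddenly touching far more documents than their baseline:
stream = [(t, 10 + (t % 2)) for t in range(30)] + [(30, 500)]
print(rolling_zscore_anomalies(stream))  # [(30, 500)]
```

A production pipeline would compute the same statistics per key over Spark structured streams; the scoring logic itself is this simple.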

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderābād

On-site

We are looking for a highly motivated, analytical and results-driven back-end developer who is comfortable with the backend programming language Python. To succeed as a backend developer, you should be focused on building better, more efficient programs and creating a better end-user experience. You should be knowledgeable, collaborative, and motivated. Responsibilities: Designing and developing APIs. Collaborating with the front-end developers and other team members to establish objectives and design more functional, cohesive code to enhance the user experience. Maintain code integrity and organization. Escalate risks and critical issues to program leadership. Deliver developed and tested application releases on time and with high quality. Staying abreast of developments in web applications and programming languages. Ability to work independently or with a group. Skill Set (Good to have): Appreciation for clean and well documented code. Proficiency with Git, agile methodology and familiarity with project management tools such as Jira. Experience with IAM, debugging, multi-threading, SSL/TLS/HTTPS, Redis, Apache, Nginx, RabbitMQ, Docker and Web Sockets. Job Type: Full-time Location Type: In-person Schedule: Day shift Monday to Friday Morning shift Education: Bachelor's (Preferred) Experience: Python: 4 years (Preferred) Django: 3 years (Preferred) Work Location: In person
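The core of the API work this role describes can be sketched without any framework: a WSGI callable that routes a request and returns JSON. This is a stdlib-only illustration (the /health route and payload are made up); a real project here would use Django/DRF as the posting specifies:

```python
import json
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """Minimal WSGI app: JSON health endpoint plus a 404 fallback."""
    if environ.get("PATH_INFO") == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

# Exercise the app in-process, no server needed:
environ = {}
setup_testing_defaults(environ)  # fills in a valid baseline environ
environ["PATH_INFO"] = "/health"
captured = {}
def start_response(status, headers):
    captured["status"] = status
result = b"".join(app(environ, start_response))
print(captured["status"], result.decode())  # 200 OK {"status": "ok"}
```

Django views follow the same request-in, response-out contract; WSGI is simply the layer underneath.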

Posted 1 week ago

Apply

12.0 years

34 - 45 Lacs

India

Remote

**Need to be Databricks SME *** Location - offshore (Anywhere from India - Remote) - Need to work in EST Time (US shift) Need 12+ Years of experience. 5 Must Haves: 1. Data Expertise -- worked in Azure Databricks/Pipeline/Shut Down Clusters -- 2 or more years' experience 2. Unity Catalog migration -- well versed -- done Terraform scripting in DevOps -- coding & understanding the code -- understanding the logic behind the scenes -- automate functionality 3. Terraform Expertise -- code building -- 3 or more years 4. Understanding data mesh architecture -- decoupling applications -- ability to have things run in parallel -- clear understanding -- 2 plus years of experience with the Microsoft Azure Cloud Platform 5. Great problem solver Key Responsibilities: Architect, configure, & optimize Databricks Pipelines for large-scale data processing within an Azure Data Lakehouse environment. Set up & manage Azure infrastructure components including Databricks Workspaces, Azure Containers (AKS/ACI), Storage Accounts, & Networking. Design & implement a monitoring & observability framework using tools like Azure Monitor, Log Analytics, & Prometheus/Grafana. Collaborate with platform & data engineering teams to enable microservices-based architecture for scalable & modular data solutions. Drive automation & CI/CD practices using Terraform, ARM templates, & GitHub Actions/Azure DevOps. Required Skills & Experience: Strong hands-on experience with Azure Databricks, Delta Lake, & Apache Spark. Deep understanding of Azure services: Resource Manager, AKS, ACR, Key Vault, & Networking. Proven experience in microservices architecture & container orchestration. Expertise in infrastructure-as-code, scripting (Python, Bash), & DevOps tooling. Familiarity with data governance, security, & cost optimization in cloud environments. Bonus: Experience with event-driven architectures (Kafka/Event Grid). Knowledge of data mesh principles & distributed data ownership.
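Terraform also accepts JSON configuration (`*.tf.json`), so the infrastructure-as-code automation this role asks for can be driven from Python scripts that emit configuration programmatically. A sketch of that pattern, assuming a Databricks-style cluster resource (the resource and attribute names below are illustrative placeholders, not a verified provider schema):

```python
import json

def databricks_cluster_config(name, workers, autotermination_minutes=30):
    """Build a Terraform-JSON resource block for a hypothetical cluster.

    Auto-terminating idle clusters ("Shut Down Clusters" in the posting)
    is modelled by the autotermination setting.
    """
    return {
        "resource": {
            "databricks_cluster": {
                name: {
                    "cluster_name": name,
                    "num_workers": workers,
                    "autotermination_minutes": autotermination_minutes,
                }
            }
        }
    }

config = databricks_cluster_config("etl-nightly", workers=4)
# Writing this string to etl.tf.json would make it consumable by terraform.
print(json.dumps(config, indent=2))
```

Generating configuration this way keeps cluster definitions reviewable in version control while letting scripts vary worker counts per environment.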
Interview: Two rounds of interviews (1st with manager & 2nd with the team) Job Type: Full-time Pay: ₹3,400,000.00 - ₹4,500,000.00 per year Schedule: US shift

Posted 1 week ago

Apply

0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Proficiency with server-side languages Java, J2EE Experience in Spring Boot, Spring Cloud, Microservices, Apache Camel, HTTP, REST APIs on AJAX, data payloads of XML, JSON/JSONP Experience with web application development and Proficiency with JavaScript, jQuery Experience with Scrum/Agile development methodologies Experience with MongoDB and MySQL Experience in Ecommerce, Project Management, Implementation A day in the life of an Infosys Equinox employee: As part of the Infosys Equinox delivery team, your primary role would be to ensure effective Design, Development, Validation and Support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate the same into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Infosys Equinox is a human-centric digital commerce platform that helps brands provide an omnichannel and memorable shopping experience to their customers. With a future-ready architecture and integrated commerce ecosystem, Infosys Equinox provides an end-to-end commerce platform covering all facets of an enterprise’s e-commerce needs.
Knowledge of more than one technology Basics of Architecture and Design fundamentals Knowledge of Testing tools Knowledge of agile methodologies Understanding of Project life cycle activities on development and maintenance projects Understanding of one or more Estimation methodologies, Knowledge of Quality processes Basics of business domain to understand the business requirements Analytical abilities, Strong Technical Skills, Good communication skills Good understanding of the technology and domain Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods Awareness of latest technologies and trends Team Management, Excellent problem solving, analytical and debugging skills Technical strength in mobile and web technologies Communication Problem-solving & Decision-making Strong front-end development

Posted 1 week ago

Apply

0 years

0 Lacs

Delhi

On-site

Department Platform Engineering Job posted on Jul 29, 2025 Employment type FTE Job Summary: We are looking for a Deployment Engineer with strong hands-on experience in Linux systems, virtualization platforms, cloud orchestration technologies, and infrastructure automation. This role involves end-to-end deployment and configuration of both private and public cloud environments, ensuring robust, scalable, and secure infrastructure. The ideal candidate will also bring scripting proficiency and automation skills to streamline cloud deployments and infrastructure operations. Key Responsibilities: Deploy, configure, and maintain Linux operating systems (Ubuntu, RHEL, CentOS) on physical and virtual platforms. Install and manage hypervisors including KVM, VMware ESXi, and XenServer. Lead and implement private and public cloud infrastructure using tools like OpenStack, Apache CloudStack, or other orchestration platforms. Write and maintain automation scripts using Bash, Python, or similar scripting languages for deployment and operational tasks. Use Ansible and Terraform to automate infrastructure provisioning, configuration, and management. Implement and manage monitoring and observability tools such as Zabbix, Prometheus, and Grafana. Work with physical infrastructure teams to set up and troubleshoot bare-metal hardware, storage systems, and network configurations. Document architecture designs, implementation plans, SOPs, and deployment workflows. Ensure security best practices, system hardening, and compliance with infrastructure policies. Required Skills and Experience: Strong command over Linux system administration with proven troubleshooting capabilities. Experience with virtualization platforms: KVM, VMware, Xen. Hands-on experience in deploying any cloud orchestration platforms like OpenStack or CloudStack. Proficiency in scripting languages – Bash, Python, or similar. Deep understanding of automation tools like Ansible and Terraform. 
Knowledge of monitoring tools – Zabbix, Prometheus, Grafana. Solid understanding of storage technologies, RAID configurations, and volume management. Familiarity with hardware components, server installations, and physical infrastructure planning. Basic understanding of networking concepts, including IP addressing, routing, VLANs, and firewall basics. Preferred Qualifications : Certifications such as RHCE, VCP, LFCS, or similar. Exposure to container orchestration (Docker, Kubernetes) is a plus. Experience with cloud-native and hybrid infrastructure deployments. Strong analytical thinking and excellent communication/documentation skills. Ability to work independently in high-paced and mission-critical environments
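The monitoring work this role describes (Zabbix, Prometheus, Grafana) ultimately reduces to evaluating threshold rules against current metric values. A pure-Python sketch of that evaluation loop, where the metric names, operators, and thresholds are all illustrative assumptions:

```python
def evaluate_alerts(metrics, rules):
    """Return names of rules whose condition holds for the current metrics.

    rules maps alert name -> (metric, operator, threshold); metrics missing
    from the snapshot are skipped rather than treated as firing.
    """
    firing = []
    for name, (metric, op, threshold) in rules.items():
        value = metrics.get(metric)
        if value is None:
            continue
        if (op == ">" and value > threshold) or (op == "<" and value < threshold):
            firing.append(name)
    return firing

metrics = {"cpu_percent": 92.5, "disk_free_gb": 120.0, "load_1m": 1.3}
rules = {
    "HighCPU": ("cpu_percent", ">", 90),
    "LowDisk": ("disk_free_gb", "<", 10),
}
print(evaluate_alerts(metrics, rules))  # ['HighCPU']
```

Real systems add durations ("for 5m"), label matching, and routing, but the core predicate check is this shape.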

Posted 1 week ago

Apply

175.0 years

0 Lacs

Gurgaon

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? Expertise with handling large volumes of data coming from many different disparate systems Expertise with Core Java, multithreading, backend processing, transforming large data volumes Working knowledge of Apache Flink, Apache Airflow, Apache Beam, open source data processing platforms Working knowledge of cloud platforms like GCP. Working knowledge of databases and performance tuning for complex big data scenarios - SingleStore DB and In-Memory Processing Cloud Deployments, CI/CD and Platform Resiliency Good experience with MVEL Excellent communication skills, collaboration mindset and ability to work through unknowns Work with key stakeholders to drive data solutions that align to strategic roadmaps, prioritized initiatives and strategic Technology directions. Own accountability for all quality aspects and metrics of product portfolio, including system performance, platform availability, operational efficiency, risk management, information security, data management and cost effectiveness. Minimum Qualifications: Bachelor’s degree in computer science, Computer Science Engineering, or related field is required. 3+ years of large-scale technology engineering and formal management in a complex environment and/or comparable experience.
To be successful in this role you will need to be good in Java, Flink, SQL, Kafka & GCP Successful engineering and deployment of enterprise-grade technology products in an Agile environment. Large scale software product engineering experience with contemporary tools and delivery methods (i.e. DevOps, CD/CI, Agile, etc.). 3+ years' experience in hands-on engineering in Java and the data/distributed eco-system. Ability to see the big picture with attention given to critical details. Preferred Qualifications: Knowledge on Kafka, Spark Finance domain knowledge We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial-well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
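The "transforming large data volumes" work with Flink named above centers on keyed, windowed aggregation of an event stream. A pure-Python model of a tumbling-window sum (the event fields, 60-second window, and account keys are illustrative; Flink does the same grouping at scale with fault tolerance):

```python
from collections import defaultdict

def tumbling_window_sums(events, window_seconds=60):
    """Sum transaction amounts per (window_start, account) bucket.

    Each event is (timestamp_seconds, account_id, amount); a tumbling
    window assigns every event to exactly one fixed-size interval.
    """
    sums = defaultdict(float)
    for ts, account, amount in events:
        window_start = (ts // window_seconds) * window_seconds
        sums[(window_start, account)] += amount
    return dict(sums)

events = [
    (5, "acct-1", 10.0),
    (42, "acct-1", 5.0),
    (61, "acct-1", 7.0),   # falls into the next 60s window
    (30, "acct-2", 2.5),
]
print(tumbling_window_sums(events))
```

In Flink the equivalent is a keyBy on account followed by a tumbling event-time window and a sum; the bucketing arithmetic is identical.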

Posted 1 week ago

Apply

5.0 years

5 - 10 Lacs

Gurgaon

On-site

Manager EXL/M/1435552 ServicesGurgaon Posted On 28 Jul 2025 End Date 11 Sep 2025 Required Experience 5 - 10 Years Basic Section Number Of Positions 1 Band C1 Band Name Manager Cost Code D013514 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 1500000.0000 - 2500000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Analytics - UK & Europe Organization Services LOB Analytics - UK & Europe SBU Analytics Country India City Gurgaon Center EXL - Gurgaon Center 38 Skills Skill JAVA HTML Minimum Qualification B.COM Certification No data available Job Description Job Description: Senior Full Stack Developer Position: Senior Full Stack Developer Location: Gurugram Relevant Experience Required: 8+ years Employment Type: Full-time About the Role We are looking for a Senior Full Stack Developer who can build end-to-end web applications with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and Vector Databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization. Key Responsibilities Front-End Development Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React. Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI. Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly. Ensure cross-browser compatibility and optimize for performance and accessibility. Collaborate with designers to translate wireframes and prototypes into functional components. Back-End Development Develop RESTful & GraphQL APIs with Django/DRF and Node.js/Express. 
Design and implement microservices & event-driven architectures. Optimize server performance and ensure secure API integrations. Database & Data Management Work with structured (PostgreSQL, MySQL) and unstructured databases (MongoDB, Cassandra, DynamoDB). Integrate and manage Vector Databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations. Implement sharding, clustering, caching, and replication strategies for scalability. Manage both transactional and analytical workloads efficiently. Real-Time Processing & Visualization Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams. Build live features (e.g., notifications, chat, analytics) using WebSockets & Server-Sent Events (SSE). Visualize large-scale data in real time for dashboards and BI applications. DevOps & Deployment Deploy applications on cloud platforms (AWS, Azure, GCP). Use Docker, Kubernetes, Helm, and Terraform for scalable deployments. Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI. Monitor, log, and ensure high availability with Prometheus, Grafana, ELK/EFK stack. Good to have AI & Advanced Capabilities Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search. Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings. Work on multimodal data processing (text, image, and video). 
Preferred Skills & Qualifications Core Stack Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI Back-End: Python (Django/DRF), Node.js/Express Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, Vector Databases (Pinecone, Milvus, Weaviate, Chroma) APIs: REST, GraphQL, gRPC State-of-the-Art & Advanced Tools Streaming: Apache Kafka, Apache Pulsar, Redis Streams Visualization: D3.js, Highcharts, Plotly, Deck.gl Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD Cloud: AWS Lambda, Azure Functions, Google Cloud Run Monitoring: Prometheus, Grafana, OpenTelemetry Workflow Type: Back Office
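The vector-database retrieval this role mentions (Pinecone, Milvus, Weaviate, Chroma) is, at its core, nearest-neighbour search over embeddings. A minimal sketch with cosine similarity, where the toy 3-dimensional vectors and document IDs are illustrative stand-ins for real model embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """Return the k document IDs whose vectors best match the query."""
    scored = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

index = [
    ("doc-a", [1.0, 0.0, 0.0]),
    ("doc-b", [0.9, 0.1, 0.0]),
    ("doc-c", [0.0, 1.0, 0.0]),
]
print(top_k([1.0, 0.05, 0.0], index))  # ['doc-a', 'doc-b']
```

Production vector stores replace the linear scan with approximate indexes (HNSW, IVF) so the same query stays fast over millions of embeddings.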

Posted 1 week ago

Apply

10.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Cloud Architect - Manager As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key business and functions like Banking, Insurance, Manufacturing, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance. The opportunity We’re looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a part of a growing Data and Analytics team. Your Key Responsibilities Have proven experience in driving Analytics GTM/Pre-Sales by collaborating with senior stakeholder/s in the client and partner organization in BCM, WAM, Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops as well as managing in-flight projects focused on cloud and big data. Need to work with the client in converting business problems/challenges to technical solutions considering security, performance, scalability etc. [10-15 years] Need to understand current & future state enterprise architecture. Need to contribute to various technical streams during implementation of the project.
Provide product and design level technical best practices Interact with senior client technology leaders, understand their business goals, create, architect, propose, develop and deliver technology solutions Define and develop client specific best practices around data management within a Hadoop environment or cloud environment Recommend design alternatives for data ingestion, processing and provisioning layers Design and develop data ingestion programs to process large data sets in Batch mode using HIVE, Pig and Sqoop, Spark Develop data ingestion programs to ingest real-time data from LIVE sources using Apache Kafka, Spark Streaming and related technologies Skills And Attributes For Success Architect in designing highly scalable solutions on Azure, AWS and GCP. Strong understanding & familiarity with all Azure/AWS/GCP/Big Data Ecosystem components Strong understanding of underlying Azure/AWS/GCP Architectural concepts and distributed computing paradigms Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming Hands on experience with major components like cloud ETLs, Spark, Databricks Experience working with NoSQL in at least one of the data stores - HBase, Cassandra, MongoDB Knowledge of Spark and Kafka integration with multiple Spark jobs to consume messages from multiple Kafka partitions Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks. Strong understanding of underlying Hadoop Architectural concepts and distributed computing paradigms Good knowledge in Apache Kafka & Apache Flume Experience in Enterprise grade solution implementations.
Experience in performance benchmarking enterprise applications Experience in Data security [on the move, at rest] Strong UNIX operating system concepts and shell scripting knowledge To qualify for the role, you must have Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communicator (written and verbal, formal and informal). Ability to multi-task under pressure and work independently with minimal supervision. Strong verbal and written communication skills. Must be a team player and enjoy working in a cooperative and collaborative team environment. Adaptable to new technologies and standards. Participate in all aspects of Big Data solution delivery life cycle including analysis, design, development, testing, production deployment, and support Responsible for the evaluation of technical risks and map out mitigation strategies Working knowledge in any of the cloud platforms, AWS or Azure or GCP Excellent business communication, Consulting, Quality process skills Excellent Consulting Skills Excellence in leading Solution Architecture, Design, Build and Execute for leading clients in Banking, Wealth Asset Management, or Insurance domain. Minimum 7 years hands-on experience in one or more of the above areas. Minimum 10 years industry experience Ideally, you’ll also have Strong project management skills Client management skills Solutioning skills What We Look For People with technical experience and enthusiasm to learn new things in this fast-moving environment What Working At EY Offers At EY, we’re dedicated to helping our clients, from start–ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development.
We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
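The Kafka → Spark Streaming ingestion pattern listed in this role's responsibilities processes a continuous feed as a series of micro-batches. A pure-Python model of that batching loop (record values and the batch size of 3 are illustrative; a real pipeline would consume from Kafka partitions and hand each batch to Spark):

```python
def micro_batches(records, batch_size=3):
    """Group an incoming record stream into fixed-size micro-batches,
    flushing any final partial batch — the core shape of micro-batch
    stream ingestion."""
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the trailing partial batch
        yield batch

batches = list(micro_batches(["r1", "r2", "r3", "r4", "r5"], batch_size=3))
print(batches)  # [['r1', 'r2', 'r3'], ['r4', 'r5']]
```

Spark Streaming adds the hard parts (offset tracking, checkpointing, parallel consumption per partition), but each trigger interval produces exactly this kind of bounded batch to process.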

Posted 1 week ago

Apply

4.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Data Engineer (Python) As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key business and functions like Banking, Insurance, Manufacturing, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance. The opportunity We are currently seeking a seasoned Data Engineer with solid experience in Python to join our team of professionals. Key Responsibilities: Develop Data Lake tables leveraging AWS Glue and Spark for efficient data management. Implement data pipelines using Airflow, Kubernetes, and various AWS services Must Have Skills: Experience in deploying and managing data warehouses Advanced proficiency of at least 4 years in Python for data analysis and organization Solid understanding of AWS cloud services Proficient in using Apache Spark for large-scale data processing Skills and Qualifications Needed: Practical experience with Apache Airflow for workflow orchestration Demonstrated ability in designing, building, and optimizing ETL processes, data pipelines, and data architectures Flexible, self-motivated approach with strong commitment to problem resolution. Excellent written and oral communication skills, with the ability to deliver complex information in a clear and effective manner to a range of different audiences.
Willingness to work globally and across different cultures, and to participate in all stages of the data solution delivery lifecycle, including pre-studies, design, development, testing, deployment, and support. Nice to have: exposure to Apache Druid and familiarity with relational database systems Desired Work Experience: A degree in computer science or a similar field What Working At EY Offers At EY, we’re dedicated to helping our clients, from start–ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
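The Airflow orchestration this role calls for is built on a dependency DAG: a task runs only after its upstream tasks succeed. The scheduling order can be sketched with the stdlib `graphlib` (the extract → transform → load → notify task names are an illustrative pipeline, not an Airflow API):

```python
from graphlib import TopologicalSorter

# Map each task to the set of tasks that must complete before it,
# mirroring how Airflow operators declare upstream dependencies.
dag = {
    "transform": {"extract"},
    "load": {"transform"},
    "notify": {"load"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'notify']
```

Airflow's scheduler does the same resolution per DAG run, additionally handling retries, parallelism, and cross-run scheduling intervals.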

Posted 1 week ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world. EY-Consulting - Data and Analytics – Senior - Clinical Integration Developer EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated Consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 Companies. Within EY’s Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard the businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making. The opportunity We’re looking for Clinical Trials Integration Developers with 5+ years of experience in software development within the life sciences domain to support the integration of Medidata’s clinical trial systems across the Client R&D environment. This role offers the chance to build robust, compliant integration solutions, contribute to the design of clinical data workflows, and ensure interoperability across critical clinical applications. You will collaborate closely with business and IT teams, playing a key role in enhancing data flow, supporting trial operations, and driving innovation in clinical research. Your Key Responsibilities Design and implement integration solutions to connect Medidata clinical trial systems with other applications within the clinical data landscape.
Develop and configure system interfaces using programming languages (e.g., Java, Python, C#) or integration middleware tools (e.g., Informatica, AWS, Apache NiFi). Collaborate with clinical business stakeholders and IT teams to gather requirements, define technical specifications, and ensure interoperability. Create and maintain integration workflows and data mappings that align with clinical trial data standards (e.g., CDISC, SDTM, ADaM). Ensure all development and implementation activities comply with GxP regulations and are aligned with validation best practices. Participate in agile development processes, including sprint planning, code reviews, testing, and deployment. Troubleshoot and resolve integration-related issues, ensuring stable and accurate data flow across systems. Document integration designs, workflows, and technical procedures to support long-term maintainability. Contribute to team knowledge sharing and continuous improvement initiatives within the integration space. Skills And Attributes For Success Apply a hands-on, solution-driven approach to implement integration workflows using code or middleware tools within clinical data environments. Strong communication and problem-solving skills with the ability to collaborate effectively with both technical and clinical teams. Ability to understand and apply clinical data standards and validation requirements when developing system integrations. To qualify for the role, you must have Experience: Minimum 5 years in software development within the life sciences domain, preferably in clinical trial management systems. Education: Must be a graduate, preferably BE/B.Tech/BCA/BSc IT. Technical Skills: Proficiency in programming languages such as Java, Python, or C#, and experience with integration middleware like Informatica, AWS, or Apache NiFi; strong background in API-based system integration.
Domain Knowledge: Solid understanding of clinical trial data standards (e.g., CDISC, SDTM, ADaM) and data management processes; experience with agile methodologies and GxP-compliant development environments. Soft Skills: Strong problem-solving abilities, clear communication, and the ability to work collaboratively with clinical and technical stakeholders. Additional Attributes: Capable of implementing integration workflows and mappings, with attention to detail and a focus on delivering compliant and scalable solutions. Ideally, you’ll also have Hands-on experience with ETL tools and clinical data pipeline orchestration frameworks relevant to clinical research. Hands-on experience with clinical R&D platforms such as Oracle Clinical, Medidata RAVE, or other EDC systems. Proven experience leading small integration teams and engaging with cross-functional stakeholders in regulated (GxP) environments. What We Look For A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment. An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide. Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries. What Working At EY Offers At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.
Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
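As a rough illustration of the data-mapping work this role involves, here is a minimal, hypothetical transform from raw EDC export rows into SDTM-style DM (Demographics) variables. The field names and sample data are simplified for the sketch; a real mapping is driven by the study's mapping specification and validated under GxP:

```python
# Hypothetical mapping from raw EDC rows to SDTM-style DM variables.
# Simplified for illustration; not a complete or official CDISC mapping.

RAW = [
    {"subject": "001-0001", "sex": "f", "birth_date": "1980-03-12"},
    {"subject": "001-0002", "sex": "M", "birth_date": "1975-11-02"},
]

def to_dm(row, studyid="STUDY01"):
    return {
        "STUDYID": studyid,
        "DOMAIN": "DM",
        "USUBJID": f"{studyid}-{row['subject']}",  # unique subject identifier
        "SEX": row["sex"].upper(),                 # normalize casing
        "BRTHDTC": row["birth_date"],              # already ISO 8601 here
    }

dm = [to_dm(r) for r in RAW]
print(dm[0]["USUBJID"])  # STUDY01-001-0001
```

Middleware tools like Informatica or NiFi express the same row-level mapping declaratively, but the normalize-and-rename shape is the same.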

Posted 1 week ago

Apply

4.0 years

3 - 5 Lacs

Gurgaon

Remote

Job description About this role What are Aladdin and Aladdin Engineering? You will be working on BlackRock's investment operating system called Aladdin, which is used both internally within BlackRock and externally by many financial institutions. Aladdin combines sophisticated risk analytics with comprehensive portfolio management, trading, and operations tools on a single platform. It powers informed decision-making and creates a connective tissue for thousands of users investing worldwide. Our development teams are part of Aladdin Engineering. We collaborate to build the next generation of technology that transforms the way information, people, and technology intersect for global investment firms. We build and package tools that manage trillions in assets and support millions of financial instruments. We perform risk calculations and process millions of transactions for thousands of users worldwide every day. Your Team: The Database Hosting Team is a key part of Platform Hosting Services, which operates under the broader Aladdin Engineering group. Hosting Services is responsible for managing the reliability, stability, and performance of the firm's financial systems, including Aladdin, and ensuring its availability to our business partners and customers. We are a globally distributed team, spanning multiple regions, providing engineering and operational support for online transaction processing, data warehousing, data replication, and distributed data processing platforms. Your Role and Impact: Data is the backbone of any world-class financial institution. The Database Operations Team ensures the resiliency and integrity of that data while providing instantaneous access to a large global user base at BlackRock and across many institutional clients. As specialists in database technology, our team is involved in every aspect of system design, implementation, tuning, and monitoring, using a wide variety of industry-leading database technologies.
We also develop code to provide analysis and insights and to automate our solutions at scale. Although our specialty is database technology, to excel in our role, we must understand the environment in which our technology operates. This includes understanding the business needs, application server stack, and interactions between database software, operating systems, and host hardware to deliver the best possible service. We are passionate about performance and innovation. At every level of the firm, we embrace diversity and offer flexibility to enhance work-life balance. Your Responsibilities: The role involves providing operations, development, and project support within the global database environment across various platforms. Key responsibilities include: Operational Support for Database Technology: Engineering, administration, and operations of OLTP, OLAP, data warehousing platforms, and distributed NoSQL systems. Collaboration with infrastructure teams, application developers, and business teams across time zones to deliver high-quality service to Aladdin users. Automation and development of database operational, monitoring, and maintenance toolsets to achieve scalability and efficiency. Database configuration management, capacity and scale management, schema releases, consistency, security, disaster recovery, and audit management. Managing operational incidents, conducting root-cause analysis, resolving critical issues, and mitigating future risks. Assessing issues for severity, troubleshooting proactively, and ensuring timely resolution of critical system issues. Escalating outages when necessary, collaborating with Client Technical Services and other teams, and coordinating with external vendors for support. Project-Based Participation: Involvement in major upgrades and migration/consolidation exercises. Exploring and implementing new product features. Contributing to performance tuning and engineering activities.
Contributing to Our Software Toolset: Enhancing monitoring and maintenance utilities in Perl, Python, and Java. Contributing to data captures to enable deeper system analysis. Qualifications: B.E./B.Tech/MCA or another relevant engineering degree from a reputable university. 4+ years of proven experience in Database Administration or a similar role. Skills and Experience: Enthusiasm for acquiring new technical skills. Effective communication with senior management from both IT and business areas. Understanding of large-scale enterprise application setups across data centers/cloud environments. Willingness to work weekends on DBA activities and shift hours. Experience with database platforms like SAP Sybase, Microsoft SQL Server, Apache Cassandra, Cosmos DB, PostgreSQL, and data warehouse platforms such as Snowflake, Greenplum. Exposure to public cloud platforms such as Microsoft Azure, AWS, and Google Cloud. Knowledge of programming languages like Python, Perl, Java, Go; automation tools such as Ansible/AWX; source control systems like Git and Azure DevOps. Experience with operating systems like Linux and Windows. Strong background in supporting mission-critical applications and performing deep technical analysis. Flexibility to work with various technologies and write high-quality code. Exposure to project management. Passion for interactive troubleshooting, operational support, and innovation. Creativity and a drive to learn new technologies. Data-driven problem-solving skills and a desire to scale technology for future needs. Operating Systems: Familiarity with Linux/Windows. Proficiency with shell commands (grep, find, sed, awk, ls, cp, netstat, etc.). Experience checking system performance metrics like CPU, memory, and disk usage on Unix/Linux. Other Personal Characteristics: Integrity and the highest ethical standards. Ability to quickly adjust to complex data and information, displaying strong learning agility.
Self-starter with a commitment to superior performance. Natural curiosity and a desire to always learn. If this excites you, we would love to discuss your potential role on our team! Our benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. 
For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. Job Requisition # R255448
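The grep/awk-style triage of system metrics this posting asks for can be illustrated with a small, self-contained Python stand-in that flags filesystems above a usage threshold. The df-style output is hardcoded so the sketch is deterministic; a real check would parse live `df -P` output:

```python
# Pure-Python stand-in for a grep/awk one-liner that flags filesystems
# above a usage threshold. Sample df-style output is hardcoded so the
# sketch is self-contained.

DF_OUTPUT = """\
Filesystem 1024-blocks Used Available Capacity Mounted-on
/dev/sda1 102400000 92160000 10240000 90% /
/dev/sdb1 204800000 61440000 143360000 30% /data
tmpfs 8192000 0 8192000 0% /dev/shm
"""

def filesystems_over(threshold_pct, df_text):
    alerts = []
    for line in df_text.splitlines()[1:]:    # skip header, like awk 'NR > 1'
        fields = line.split()                # awk-style whitespace splitting
        pct = int(fields[4].rstrip("%"))     # 5th column: Capacity
        if pct >= threshold_pct:
            alerts.append((fields[5], pct))  # (mount point, usage %)
    return alerts

print(filesystems_over(80, DF_OUTPUT))  # [('/', 90)]
```

The equivalent shell one-liner would be something like `df -P | awk 'NR>1 && $5+0 >= 80 {print $6, $5}'`; wrapping the logic in a function makes it reusable in monitoring toolsets.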

Posted 1 week ago

Apply

3.0 years

7 - 9 Lacs

Gurgaon

Remote

Job description About this role BlackRock is a global leader in investment management, risk management, and advisory services for institutional and retail clients. We help clients achieve their goals and overcome challenges with a range of products, including separate accounts, mutual funds, iShares® (exchange-traded funds), and other pooled investment vehicles. We also offer risk management, advisory, and enterprise investment system services to a broad base of institutional investors through BlackRock Solutions®. Headquartered in New York City, as of February 5, 2025, we handle approximately $11.5 trillion in assets under management (AUM) and have around 19,000 employees in offices across 38 countries, with a significant presence in key global markets, including North and South America, Europe, Asia, Australia, the Middle East, and Africa. Aladdin Data: When BlackRock was founded in 1988, the goal was to combine financial services with innovative technology. Today, BlackRock is a leading FinTech platform for investment management and technology services globally. Data is central to the Aladdin platform, differentiating us through our ability to consume, store, analyze, and gain insights from it. The Aladdin Data team maintains a pioneering data platform that delivers high-quality data to users, including investors, operations staff, data scientists, and engineers. Our aim is to provide consistent, high-quality data while evolving our platform to support the firm's growth. We build high-performance data pipelines, enable data discovery and consumption, and continually enhance our data storage capabilities. Studio Self-service Front-end Engineering: Our team develops full-stack web applications for vendor data self-service, client data configuration, pipelines, and workflows. We support over a thousand internal users and hundreds of clients.
We manage the data toolkit, including client-facing data requests, modeling, configuration management, ETL tools, CRUD applications, customized workflows, and back-end APIs to deliver exceptional client and user experiences with intuitive tools and excellent UX. Job Description and Responsibilities: • Design, build, and maintain various front-end and corresponding back-end platform components, working with Product and Program Managers. • Implement new user interfaces and business functionalities to meet evolving business and customer requirements, working with end users, with clear and concise documentation. • Analyze and improve the performance of applications and related operational workflows to improve efficiency and throughput. • Diagnose, research, and resolve software defects. • Ensure software stability through documentation, code reviews, regression, unit, and user acceptance testing for smooth production operations. • Lead all aspects of level 2 & 3 application support, ensuring smooth operation of existing processes and meeting new business opportunities. • Be a self-starter and work with minimal direction in a globally distributed team. Role Essentials: • A passion for engineering highly available, performant full-stack applications with a "Student of Markets and Technology" attitude. • Bachelor's or master's degree or equivalent experience in computer science or engineering. • 3+ years of professional experience working in teams. • VP-level candidates should have experience leading teams delivering critical applications. • Experience in full-stack user-facing application development using web technologies (Angular, React, JavaScript) and Java-based REST API (Spring framework). • Experience in testing frameworks such as Protractor, TestCafe, Jest. • Knowledge in relational database development and at least one NoSQL Database (e.g., Apache Cassandra, MongoDB, etc.). 
• Knowledge of software development methodologies (analysis, design, development, testing) and a basic understanding of Agile/Scrum methodology and practices. If interested, all candidates must apply through your career services office. Additionally, you MUST apply online through our career site at www.BlackRock.com and submit a copy of your most recent resume and cover letter. BlackRock is proud to be an Equal Opportunity/Affirmative Action Employer. We celebrate diversity and are committed to crafting an inclusive environment for all employees Our benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. 
Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. Job Requisition # R255972

Posted 1 week ago

Apply

5.0 - 8.0 years

6 Lacs

Mohali

On-site

About the Role We are seeking a highly skilled and motivated Senior Java Developer with 5–8 years of experience to join our engineering team. The ideal candidate will have strong backend development expertise, a deep understanding of microservices, and a solid grasp of agile methodologies. This is a hands-on role focused on designing, developing, and maintaining scalable applications in a collaborative, fast-paced environment. Key Responsibilities Design, develop, test, and maintain scalable Java-based applications using Java 8 or higher and Spring Boot. Build RESTful APIs and microservices with clean, maintainable code. Work with SQL and NoSQL databases to manage data storage and retrieval effectively. Collaborate with cross-functional teams in an Agile/Scrum environment. Write unit and integration tests using JUnit, Mockito, and apply Test-Driven Development (TDD) practices. Manage source code with Git and build applications using Maven. Create and manage Docker containers for development and deployment. Troubleshoot and debug production issues in Unix/Linux environments. Participate in code reviews and ensure adherence to best practices. Must-Have Qualifications 5–8 years of hands-on experience with Java 8 or higher . Strong experience with Spring Boot and microservices architecture. Proficiency in Git , Maven , and Unix/Linux . Solid understanding of SQL and NoSQL databases. Experience working in Agile/Scrum teams. Hands-on experience with JUnit , Mockito , and TDD . Working knowledge of Docker and containerized deployments. Good to Have Experience with Apache Kafka for event-driven architecture. Familiarity with Ansible and/or Terraform for infrastructure automation. Knowledge of Docker Swarm or container orchestration tools. Exposure to Jenkins or other CI/CD tools. Proficiency in Bash scripting for automation and environment setup. 
Job Type: Full-time Pay: From ₹600,000.00 per year Benefits: Flexible schedule Health insurance Life insurance Provident Fund Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required) Experience: Java: 5 years (Required) Work Location: In person
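The JUnit/Mockito + TDD requirement this posting lists translates directly across stacks. As a compact, hypothetical illustration (the posting's stack is Java; this sketch uses Python's stdlib `unittest.mock` for brevity), a collaborator is mocked so the unit under test is verified without hitting a real service. `fetch_price` and the `/prices/{symbol}` endpoint are invented for the example:

```python
# Mockito-style mocking sketch with Python's unittest.mock (stdlib only).
from unittest import mock

def fetch_price(client, symbol):
    # Unit under test: thin wrapper around an injected HTTP-ish client
    raw = client.get(f"/prices/{symbol}")  # hypothetical endpoint
    return round(raw["price"], 2)

# Mock the collaborator instead of calling a real service
client = mock.Mock()
client.get.return_value = {"price": 101.2345}

assert fetch_price(client, "ACME") == 101.23
client.get.assert_called_once_with("/prices/ACME")
print("ok")
```

In the Java stack the posting names, the same shape would be `when(client.get(...)).thenReturn(...)` plus `verify(client).get(...)` in a JUnit test; the design point is identical: inject dependencies so they can be substituted in tests.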

Posted 1 week ago

Apply

3.0 years

4 - 7 Lacs

Mohali

On-site

Experience - 3+ years Location - Mohali (Candidates from nearby locations are preferred)
Job Overview – .NET Developer
Skillset: .NET Framework, .NET Core, C#, MVC, Web API, Dapper, Entity Framework, MS SQL Server, Git
Angular 12+ (Front-End), HTML, CSS, JavaScript, jQuery
Microsoft Azure Technologies (Resource Groups, Application Services, Azure SQL, Azure Web Jobs, Azure Automation & Runbooks, Azure PowerShell, etc.)
Experience in MVC design and coding.
Key responsibilities of the candidate will be:
· Be a key part of the full product development life cycle of software applications
· Ability to prototype solutions quickly and analyze/compare multiple solutions and products based on requirements
· Maintain a constant focus on performance, scalability, and security.
· Hands-on experience in building REST-based solutions conforming to HTTP standards and knowledge of how TLS/SSL works.
· Proficiency with technologies like C#, ASP.NET, ASP.NET MVC, ASP.NET Core, JavaScript, Web API, and REST Web Services.
· Working knowledge of various client-side frameworks – jQuery (Kendo UI, AngularJS, ReactJS are a plus)
· Experience with cloud services offered by MS Azure
· Understanding and analyzing the non-functional requirements for the system and how the architecture reflects them
· In-depth knowledge of encoding and encryption techniques and their usage.
· Extensive knowledge of different industry standards like OAuth 2.0, SAML 2.0, OpenID Connect, OpenAPI, SOAP, HTTP, HTTPS
· Proficiency with development tools – Visual Studio
· Proficiency with application servers – IIS, Apache (since the .NET Core framework is platform-independent).
· Experience in designing and implementing applications utilizing databases – MySQL, MS SQL Server, Oracle, AWS Aurora, Azure Database for MySQL, and non-relational databases
· Strong problem-solving and analytical skills
· Experience with microservices-based architecture is a plus
Job Types: Full-time, Permanent Pay: ₹35,000.00 - ₹65,000.00 per month Experience: .NET: 3 years (Preferred) Work Location: In person
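On the "encoding and encryption techniques and their usage" requirement above, a quick stdlib illustration of the key distinction: Base64 is reversible encoding and provides no secrecy, while an HMAC provides integrity/authenticity under a shared key. The payload and key below are illustrative:

```python
# Encoding vs. integrity, stdlib-only. Base64 is reversible (no secrecy);
# an HMAC gives tamper-evidence under a shared key.
import base64
import hashlib
import hmac

payload = b"account=42&amount=100"

# Encoding: anyone can reverse it, so it is not a security mechanism
encoded = base64.b64encode(payload)
assert base64.b64decode(encoded) == payload

# Integrity: the HMAC-SHA256 tag changes if a single byte changes
key = b"shared-secret"
tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
tampered = hmac.new(key, payload + b"0", hashlib.sha256).hexdigest()
print(tag != tampered)  # True
```

Actual confidentiality requires encryption (e.g., TLS on the wire, or an authenticated cipher at rest), which is a separate mechanism from both of the above.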

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

On-site

Why Join Us
Our Culture
Patoliya's culture is built on team collaboration, high performance, and opportunity. Everyone is encouraged to bring their skills & ideas, beliefs, backgrounds, talents, capabilities, and lifestyles to work. At Patoliya, you'll experience a finely balanced life and work culture that will help you make a difference and grow!
Our Work Style
Hard work, dedication, and a holistic approach by our employees to create impactful outcomes are our key strengths. We value passion and determination. It's not just about who you are today, but what you can be! So we don't just focus on an individual's skill and talent. Their potential and willingness to learn also matter.
Continuous Learning
Focus on training the team to grow their current profile and develop skills, competencies and talent that foster the skills for future roles and responsibilities.
Opportunity With Us
What is it like to be at Patoliya?
5 Days a Week
Quarterly Events and Outings for Lunch/Dinner
Flexible Timings
Opportunity to Work With a Fun, Amazing and Talented Team
Birthday and Festival Celebration
Great Place to Learn with Phenomenal Growth Opportunity
Annual Trip
Employee Engagement Activities
Financial Perks
Health Insurance
Performance Awards
Employee Referral Program
On-Time Salary
Highest Payroll in the Region
Overtime Pay
Leave Encashment
Cloud Engineer 3+ Years (2 Openings)
Required Skills: Communication, Apache Flink, Google Cloud Platform (GCP), Cloud Engineer, AWS services, Java, Problem-Solving, Code quality

Posted 1 week ago

Apply

12.0 years

0 Lacs

Noida

On-site

Position Summary The AEM Technical Architect (TA) position is a client-facing role requiring both technical and business/marketing knowledge and skills. The TA works to gather & understand the Client's unique business requirements and provide expert guidance, sharing best practices & recommendations with our Customer/Implementation Partners in building customized solutions to meet their business reporting needs through the AEM platform. The TA also performs quality checks to ensure that the implementation cycle follows industry best practices, flags all technical issues, and highlights risks when they arise. The TA works with Clients to strategize and drive business value from the platform and enable them to adopt and scale up in their maturity roadmap. It is a technical advisory role with some hands-on support; it requires solid technical acumen in digital platform implementation and involves constant customer interaction. What you'll do Be a recognized expert/SME for internal and regional stakeholders. Take leadership during project delivery and own Project Management responsibilities. Act as a Team Lead for small to large, multi-solution consulting engagements, which may involve interactions with multiple teams from Client or partner organizations. Build trusted advisor relationships with our Clients & Implementation Partners. Adapt to and work effectively with a variety of clients and in challenging situations, establishing credibility and trust quickly. Work on own initiative without a need for direction for most consulting activities. Gain understanding of client business requirements, key performance indicators and other functional and/or technical use cases. Review overall solution architecture and custom design solutions for AEM (Sites, Assets and Forms), technical approach and go-live readiness. Review assessments & recommendations documents and liaise with technical consultants.
Communicate effectively to Customer/Implementation Partner teams on AEM assessments & recommendations, gaps and risks. Provide advisory to key stakeholders with industry best practices & recommendations throughout the implementation process to drive Customer success and ROI. Interact frequently with Client/Implementation Partner teams - marketers, analysts, web developers, QA team, and C-level executives, mainly via conference calls or emails. Manage customer expectations of response time & issue resolution and keep projects on schedule and within scope. Troubleshoot and reproduce the technical problems reported by customers and define workarounds. Effectively analyze complex project issues, devise optimal solutions, and facilitate the recommendations to the Clients and Partners. Proactively maintain the highest level of technical expertise by staying current on DX technologies and solutions through internally and externally available learning opportunities as well as self-study. Provide thought leadership to the team and wider consulting community, helping to set future strategic direction. Participate within the technical community to develop and share best practices and processes. Enable existing/new team members with new product features, delivery processes, project-based learnings and support with any issues or queries. Foster teamwork among consultants and cross-functional teams. Technical writing and PowerPoint presentation creation.
What you need to succeed
Must Have – 12+ years of experience as a client-facing consultant with strong experience in AEM implementation & understanding in areas:
o UI technologies like jQuery, JavaScript, HTML5, CSS.
o Technologies like Java EE, Servlets, JSP, Tag libraries, and JSTL.
o Dispatcher configuration, clustering, CRX repository, workflows, replication and performance management.
o Application development, distributed application development and Internet/Intranet-based database applications.
o AEM sites/assets/forms deployment and migration.
o AEM backend development like Sling Servlets, OSGi Components and JCR Queries.
o Core frameworks such as Apache Sling and Apache Felix.
o CI/CD tools like Maven, Jenkins.
o Code quality and security tools like SONAR.
o Touch UI, Sightly (HTL) and Sling Models.
o Software design patterns.
Leading consulting teams in a Technical Architect capacity. Analysis and resolution of technical problems. Experience working effectively on multiple consulting engagements. Ability to handle clients professionally during all interactions. Experience presenting in front of various client-side audiences. Exceptional organizational, presentation, and communication skills - both verbal and written. Must be self-motivated, responsive, professional and dedicated to customer success. Possess an innovative, problem-solving, and solutions-oriented mindset. Demonstrated ability to learn quickly, be a team player, and manage change effectively. Preferably a degree in Computer Science or Engineering.
Preference will be given for:
Experience in a techno-managerial role in a large consulting organization with project/people management responsibilities.
Knowledge of the latest AEM features and the new cloud technology – AEMaaCS.
Experience with the Cloud Manager deployment tool.
Certified ScrumMaster and/or PMP certification.
Knowledge of Agile methodologies.
Good understanding of integration of AEM with other DX solutions – Commerce, Analytics, Target, Audience Manager, etc. – would be a plus.
Experience presenting in front of various technical and business audiences.
Ability to work extended hours to overlap with North America timings.
Job Type: Full-time
Application Question(s): How many years of experience do you have as a client-facing consultant with strong experience in AEM implementation & understanding of areas such as UI technologies like jQuery, JavaScript, HTML5, CSS?
Work Location: In person

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
