3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

- Support, configure and manage network security devices (Palo Alto, FortiGate and Cisco firewalls, with some other vendors also included); a scripted-change sketch in Python follows this posting
- Maintain and improve documentation of customers' network environments
- Provide Level 2/3 support and troubleshooting to resolve technical issues
- Work within change management policies to ensure the success of changes
- Implement security tools, policies and procedures as appropriate
- Coordinate with vendors and other IT teams through to resolution

Qualifications

Technical skills and experience:
- Cisco CCNP (BGP Enterprise level) or Palo Alto PCNSE certification
- More than 3 years' relevant experience supporting network technologies
- Experience supporting and configuring a range of network devices and technologies (firewalls, switches, load balancers, VPNs, etc.)
- Excellent communication skills, written and verbal
- Experience with Azure and/or AWS, enterprise networking and data centre environments
- Experience conducting network audits

Beneficial skills and experience:
- IT degree or equivalent combination of qualifications and experience
- Experience across a range of network vendors: Cisco, Palo Alto, Juniper, FortiGate
- Ability to learn new technologies quickly
- Familiarity with ITIL

The Candidate

- Decisive, dynamic, and capable of delivering to a high standard despite constraints that may be in place
- Conscientious, trustworthy, and capable of organising and delivering on tasks with little direction
- Strong troubleshooting and communication skills are an absolute necessity
- Capable of adjusting their approach for the varied customers supported
- Process-oriented with great attention to detail

Additional Information

At Version 1, we believe in providing our employees with a comprehensive benefits package that prioritises their well-being, professional growth, and financial stability. One of our standout advantages is the ability to work with a hybrid schedule along with business travel, allowing our employees to strike a balance between work and life. We also offer a range of tech-related benefits, including an innovative Tech Scheme to help keep our team members up to date with the latest technology. We prioritise the health and safety of our employees, providing private medical and life insurance coverage, as well as free eye tests and contributions towards glasses. Our team members can also stay ahead of the curve with incentivized certifications and accreditations, including AWS, Microsoft, Oracle, and Red Hat. Our employee-designed Profit Share scheme divides a portion of our company's profits each quarter amongst employees. We are dedicated to helping our employees reach their full potential, offering Pathways Career Development Quarterly, a programme designed to support professional growth.
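Day-to-day changes like these are often scripted for repeatability. Below is a minimal, hypothetical Python sketch using the netmiko library to push a reviewed change to a Cisco device; the host, credentials, and ACL commands are illustrative placeholders, not details from the posting.

```python
# Hypothetical sketch: applying a reviewed change to a Cisco IOS device.
# Host, credentials and commands are placeholders for illustration only.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "10.0.0.1",       # placeholder management IP
    "username": "netops",     # placeholder account
    "password": "changeme",   # in practice, retrieve from a secrets vault
}

change_set = [
    "ip access-list extended EDGE-IN",
    "permit tcp any host 192.0.2.10 eq 443",  # example rule
]

with ConnectHandler(**device) as conn:
    print(conn.send_config_set(change_set))                   # apply the change
    print(conn.send_command("show ip access-lists EDGE-IN"))  # verify it took
```

In a change-managed environment, a script like this would run inside an approved change window, with the verification output attached to the change record.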
Posted 11 hours ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Core experience in SAP BASIS and HANA administration:
1. Proficiency in SAP products: S/4HANA, ECC 6.0 EHP5, SAP PI/PO, SAP BW, Solution Manager, portal system, Fiori system, BODS/BOBJ, LaMa, and SLT
2. Experience in Cloud ALM is essential; experience in BTP and RISE with SAP is preferred
3. Managing SAP applications on cloud environments (AWS, Azure, SAP, and GCP)
4. Experience in migrations from on-premise to on-premise/cloud environments
5. Ability to work on both OS (Linux/Unix) tasks for SAP as well as SAP Basis runtime administration
6. Proficiency in SAP NetWeaver ABAP Basis and SAP NetWeaver Java Basis tasks
7. Extensive experience in HANA system replication, root cause analysis, and fixing issues related to system replication, performance, memory/CPU bottlenecks, HANA failovers, and system unavailability (a replication-status check is sketched below)
8. Hands-on experience and good knowledge of SSO (both Kerberos- and SAML-based SSO), configuring EWA reports, and ChaRM configuration
9. Experience in handling SSL scenarios, renewing and updating certificates, and troubleshooting certificate-related issues
10. Experience in system refresh, HANA DB upgrades, kernel updates, SPS updates, release upgrades, etc.
11. Working knowledge of integrating 3rd-party tools with SAP systems, such as Esker, Taulia, Vertex, OpenText, Libelle, NLS (IQ), Power BI, PowerApps, Denodo, HVR, SuccessFactors, Splunk, and UC4

A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from the business needs, define the to-be processes and detailed functional designs based on requirements. You will support configuring solution requirements on the products; understand any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
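As one concrete illustration of item 7, checking HANA system replication status is a routine BASIS task. The sketch below uses SAP's hdbcli Python driver to query the M_SERVICE_REPLICATION system view; connection details are placeholders, and the view and columns are assumed per standard HANA system views.

```python
# Illustrative sketch: query HANA system replication status with hdbcli.
# Host, port and credentials are placeholders.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana-host",   # placeholder
    port=30015,            # placeholder SQL port
    user="MONITOR_USER",
    password="secret",
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT SITE_NAME, SECONDARY_SITE_NAME, REPLICATION_MODE, "
        "REPLICATION_STATUS FROM M_SERVICE_REPLICATION"
    )
    for row in cur.fetchall():
        print(row)  # flag anything not in ACTIVE status for follow-up
finally:
    conn.close()
```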
Posted 11 hours ago
5.0 years
0 Lacs
Gandhinagar, Gujarat, India
On-site
smartSense is seeking a senior developer who will work with cross-functional teams to develop and deliver projects, adopting the best practices of Test-Driven Development to guarantee the robustness and stability of the code produced. We are looking for a passionate engineer who loves to write code with best coding practices and can also lead the architecture design front.

Responsibilities:
- Create solution architecture spanning software and multiple Java frameworks
- Design microservices-based architecture and manage microservices
- Advanced knowledge of Spring, Spring Boot, Spring Data JPA, Spring Security, Spring Cloud Gateway
- Able to write complex, multi-threaded algorithms as part of feature work
- Manage risk identification and risk mitigation strategies associated with the architecture
- Advanced understanding of Agile methodologies, including estimations
- Able to critically analyze different implementations and select the most suitable ones
- Create high-level implementation documents and support the team in creating low-level technical documents
- Must be able to take care of application-wide concerns such as performance, security, concurrency, transaction management, session management, caching and validation
- Must have good knowledge of REST APIs, WebSocket, OAuth, OpenID, and Java best practices
- Must have good knowledge of AWS/Azure cloud platforms and be able to use their services optimally

Mandatory skills: Spring Boot, Hibernate, JPA, J2EE, Struts, Documentation, Git, MySQL
Good-to-have skills: Agile development, system architecture, client communication
Experience: Minimum 5+ years of experience in Java development and at least 2 years of experience in designing web architecture.
Qualification: Bachelor's or Master's in Software Engineering. Preferred: BE/B.Tech/MCA/M.Sc/B.Sc
Posted 11 hours ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Position: Are you a passionate backend engineer looking to make a significant impact? Join our cross-functional, distributed team responsible for building and maintaining the core backend functionalities that power our customers. You'll be instrumental in developing scalable and robust solutions, directly impacting the efficiency and reliability of our platform. This role offers a unique opportunity to work on cutting-edge technologies and contribute to a critical part of our business, all within a supportive and collaborative environment.

Role: Junior .NET Engineer
Location: Hyderabad
Experience: 3 to 5 years
Job Type: Full-time employment

What You'll Do:
- Implement features/modules as per design and requirements shared by Architects, Leads, and BA/PM, using coding best practices
- Develop and maintain microservices using C# and .NET Core; perform unit testing as per the code coverage benchmark; support testing and deployment activities
- Microservices: containerized microservices (Docker/Kubernetes/Ansible, etc.)
- Create and maintain RESTful APIs to facilitate communication between microservices and other components
- Analyze and fix defects to deliver high-standard, stable code as per design specifications
- Utilize version control systems (e.g., Git) to manage source code
- Requirement analysis: understand and analyze functional/non-functional requirements, seek clarifications from Architects/Leads for a better understanding of requirements, and participate in estimation activity for given requirements
- Coding and development: write clean and maintainable code using software development best practices; make use of different code analyzer tools; follow a TDD approach for any implementation; perform coding and unit testing as per design
- Problem solving/defect fixing: investigate and debug any defect raised; find root causes and solutions, explore alternate approaches, and fix defects with appropriate solutions; fix defects identified during functional/non-functional testing and UAT within agreed timelines; perform estimation of defect fixes for self and the team
- Deployment support: provide prompt response during production support

Expertise You'll Bring:
- Language: C#; Visual Studio Professional; Visual Studio Code
- .NET Core 3.1 onwards; Entity Framework with code-first approach; Dependency Injection; error handling and logging
- SDLC; Object-Oriented Programming (OOP) principles; SOLID principles; clean coding principles; design patterns
- API: REST APIs with token-based authentication and authorization; Postman; Swagger
- Database: relational databases (SQL Server/MySQL/PostgreSQL); stored procedures and functions; relationships, data normalization and denormalization, indexes and performance optimization techniques

Preferred Skills:
- Development exposure to cloud: Azure/GCP/AWS
- Code quality tool: Sonar
- Exposure to CI/CD processes and tools like Jenkins
- Good understanding of Docker and Kubernetes
- Exposure to Agile software development methodologies and ceremonies

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a value-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
Posted 11 hours ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Newfold Digital is a leading web technology company serving nearly seven million customers globally. Established in 2021 through the combination of leading web services providers Endurance Web Presence and Web.com Group, our portfolio of brands includes Bluehost, BigRock, ResellerClub, CrazyDomains, HostGator, Network Solutions, Register.com, Web.com and many others. We help customers of all sizes build a digital presence that delivers results. With our extensive product offerings and personalized support, we take pride in collaborating with our customers to serve their online presence needs.

What You Will Do

Newfold is seeking an experienced Senior SDET with strong skills in testing, automation, DevOps, and programming to join our team. The Senior SDET will be responsible for architecting and building test frameworks for real-time, large-scale applications, developing and executing test cases, and driving automation for microservices and micro-frontends. The ideal candidate should have strong programming skills, experience with automated testing tools and frameworks, and a deep understanding of software development best practices.

- Design, develop, and maintain automated test suites for web and mobile applications
- Collaborate with development teams to ensure high-quality software products
- Develop and execute test cases to identify defects and improve overall software quality
- Design and implement test automation frameworks using modern technologies such as Selenium, Appium, Robot Framework, WebdriverIO and JMeter, or similar (a Python-based sketch of this kind of test follows this list)
- Integrate projects with automation CI/CD servers like Bamboo or Jenkins
- Monitor timely builds and manage product deployments efficiently
- Work with cross-functional teams to establish quality benchmarks and metrics
- Identify, report, and track defects using JIRA or similar tools
- Review and analyze test results to identify trends and areas for improvement
- Participate in code reviews and provide feedback to improve code quality and testability
- Research and evaluate new testing tools and methodologies to improve testing efficiency and effectiveness
- Mentor and provide guidance to other SDETs on the team
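As a concrete (and deliberately minimal) illustration of the framework work described above, here is a pytest-style UI smoke test in Python using Selenium WebDriver; the URL and assertions are hypothetical placeholders, not part of the posting.

```python
# Minimal illustrative UI smoke test with Selenium WebDriver (Python).
# URL and locators are placeholders; assumes chromedriver is available.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_homepage_smoke():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")       # placeholder URL
        assert "Example" in driver.title        # basic title assertion
        heading = driver.find_element(By.TAG_NAME, "h1")
        assert heading.text                     # page rendered a heading
    finally:
        driver.quit()
```

A real framework would wrap this in fixtures (driver lifecycle, environment config, reporting) so hundreds of such tests stay maintainable.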
Who You Are

- Bachelor's or Master's degree in Computer Science or related disciplines (BTech/BE/ME/BCA/MCA)
- 3 to 5 years of experience as an SDET or Software Engineer with a strong focus on testing
- Experience in architecting and building test frameworks for real-time, large-scale applications (good to know)
- Curiosity to find out how things work, to discover how to break code
- Strong QE skills in test planning, including designing and executing test cases, bug isolation, bug report writing, troubleshooting and test case management
- Experience in white-box testing and/or Test-Driven Development
- Strong programming skills in languages such as Python, Java and/or JavaScript
- Experience with automated testing tools such as Selenium, Appium, Robot Framework, WebdriverIO and JMeter, or similar (mandatory)
- Experience with test automation frameworks such as Cypress, TestNG, JUnit, or PyTest (good to know)
- Knowledge of software development best practices, including Agile methodologies
- Experience with testing web applications, APIs, and mobile applications
- Good understanding of product architectures based on microservices and micro-frontends
- Strong analytical and problem-solving skills, with the ability to think creatively and approach problems from different angles, and excellent communication and collaboration skills
- Familiarity with a variety of data analysis and machine learning techniques, including linear regression, logistic regression, classification, clustering, and dimensionality reduction
- Experience working with large datasets and high-performance computing environments
- Understanding of the business domain and the ability to translate business problems into technical solutions
- Experience in building and deploying machine learning models into production
- Familiarity with cloud computing platforms such as OCI, AWS, Azure, or GCP

Why you'll love us

- We've evolved: we provide three work environment scenarios. You can feel like a Newfolder in a work-from-home, hybrid, or work-from-the-office environment.
- Work-life balance: our work is thrilling and meaningful, but we know balance is key to living well.
- We celebrate one another's differences: we're proud of our culture of diversity and inclusion. We foster a culture of belonging. Our company and customers benefit when employees bring their authentic selves to work. We have programs that bring us together on important issues and provide learning and development opportunities for all employees. We have 20+ affinity groups where you can network and connect with Newfolders globally.
- We care about you: at Newfold, taking care of our employees is our top priority. We make sure that cutting-edge benefits are in place for you, including excellent health insurance options through some of the best insurance providers, education/certification sponsorships to further your knowledge, flexi-leave to take personal time off, and much more.
- Building a community one domain at a time, one employee at a time: all our employees are eligible for a free domain and WordPress blog, as we sponsor the domain registration costs.
- Where can we take you? We're fans of helping our employees learn different aspects of the business, be challenged with new tasks, be mentored, and grow their careers. Unfold new possibilities with #teamnewfold!
This Job Description includes the essential job functions required to perform the job described above, as well as additional duties and responsibilities. This Job Description is not an exhaustive list of all functions that the employee performing this job may be required to perform. The Company reserves the right to revise the Job Description at any time, and to require the employee to perform functions in addition to those listed above.
Posted 11 hours ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Talent500 is hiring for one of our clients.

About American Airlines: Our purpose is To Care for People on Life's Journey®. We have a relentless drive for innovation and excellence. Whether you're engaging with customers at the airport or advancing our IT infrastructure, every team member plays a vital role in shaping the future of travel. At American's Tech Hubs, we tackle complex challenges and pioneer cutting-edge technologies that redefine the travel experience. Our vast network and diverse customer base offer unique opportunities for engineers to solve real-world problems on a grand scale. Join us and immerse yourself in a dynamic, tech-driven environment where your creativity and unique strengths are celebrated. Experience the excitement of being at the forefront of technological innovation, where every day brings new opportunities to make a meaningful impact.

About the Tech Hub in India: American's Tech Hub in Hyderabad, India, is our newest location and home to team members who drive technical innovation and engineer unrivaled digital products to best serve American's customers and team members. With U.S. tech hubs in Dallas-Fort Worth, Texas and Phoenix, Arizona, our new location in Hyderabad, India, positions American to deliver industry-leading technology solutions that create a world-class customer experience.

Why you will love this job: As one diverse, high-performing team dedicated to technical excellence, you will focus relentlessly on delivering unrivaled digital products that drive a more reliable and profitable airline. The Software domain refers to the area within Information Technology that focuses on the development, deployment, management, and maintenance of software applications that support business processes and user needs. This includes development, application lifecycle management, requirement analysis, QA, security and compliance, and maintaining the applications and infrastructure.

What you will do: As noted above, this list is intended to reflect the current job, but there may be additional functions that are not referenced. Management will modify the job or require other tasks be performed whenever it is deemed appropriate to do so, observing, of course, any legal obligations, including any collective bargaining obligations.
- Writes, tests, and documents technical work products (e.g., code, scripts, processes) according to organizational standards and practices
- Devotes time to raising the quality and craftsmanship of products and systems
- Conducts root cause analysis to identify domain-level problems and prescribes action items to mitigate them
- Designs self-contained systems within a team's domain, and leads implementations of significant capabilities in existing systems
- Coaches team members in the execution of techniques to improve reliability, resiliency, security, and performance
- Decomposes intricate and interconnected designs into implementations that can be effectively built and maintained by less experienced engineers
- Anticipates trouble areas in systems under development and guides the team in instrumentation practices to ensure observability and supportability
- Defines test suites and instrumentation that ensure targets for latency and availability are consistently met in production
- Leads by example by prioritizing the closure of open vulnerabilities
- Evaluates potential attack surfaces in systems under development, identifies best practices to mitigate them, and guides teams in their implementation
- Leads the team in identifying small batches of work to deliver the highest value quickly
- Ensures reuse is a first-class consideration in all team implementations and is a passionate advocate for broad reusability
- Formally mentors teammates and helps guide them to and along needed learning journeys
- Observes their environment and identifies opportunities for introducing new approaches to problems

All you will need for success:

Minimum Qualifications - Education & Prior Job Experience:
- Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or a related technical discipline, or equivalent experience/training
- 3+ years of experience designing, developing, and implementing large-scale solutions in production environments

Preferred Qualifications - Education & Prior Job Experience:
- Master's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or a related technical discipline, or equivalent experience/training
- Airline industry experience

Mandatory Skills: Java/Python, Selenium/TestNG/Postman, LoadRunner (load testing/performance monitoring)

Skills, Licenses & Certifications - proficiency with the following technologies:
- Programming languages: Java, Python, C#, JavaScript/TypeScript
- Frameworks: Spring/Spring Boot, FastAPI (a minimal FastAPI sketch follows this posting)
- Front-end technologies: Angular/React
- Deployment technologies: Kubernetes, Docker
- Source control: GitHub, Azure DevOps
- CI/CD: GitHub Actions, Azure DevOps
- Data management: PostgreSQL, MongoDB, Redis
- Integration/API technologies: Kafka, REST, GraphQL
- Cloud providers such as Azure and AWS
- Test automation: Selenium, TestNG, Postman, SonarQube, Cypress, JUnit/NUnit/PyTest, Cucumber, Playwright, WireMock/Mockito/Moq
- Ability to optimize solutions for performance, resiliency and reliability while maintaining an eye toward simplicity
- Ability to concisely convey ideas verbally, in writing, in code, and in diagrams
- Proficiency in object-oriented design techniques and principles
- Proficiency in Agile methodologies, such as SCRUM
- Proficiency in DevOps toolchain methodologies, including continuous integration and continuous deployment

Language, Communication Skills, & Physical Abilities: Ability to effectively communicate both verbally and in writing with all levels within the organization. Physical ability necessary to safely and successfully perform the essential functions of the position, with or without any legally required reasonable accommodations that do not pose an undue hardship. Note: If the Company has reason to question an employee's physical ability to safely and/or successfully perform the position's essential job functions, the HR team generally will engage in an interactive process to determine whether a reasonable accommodation is appropriate. HR (working with the operation) ordinarily first speaks with the team member directly and they mutually identify the physical demands of the job that are or may be impacted by the employee's obvious or known condition. Then, if necessary, HR would request medical documentation from the team member's treating physician or others to confirm the employee's ability to perform those essential job functions safely and successfully.
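Since the stack above lists Python with FastAPI, here is a minimal, illustrative service sketch of the kind that supports the latency and availability instrumentation the role describes; the endpoint is a hypothetical example, not American's API.

```python
# Illustrative sketch only: a minimal FastAPI service with a health endpoint,
# the kind of liveness probe that observability/availability targets rely on.
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health() -> dict:
    # Kept trivial on purpose: orchestrators (e.g., Kubernetes) poll this
    # to decide whether the instance is serving.
    return {"status": "ok"}

# Run with: uvicorn main:app --reload   (assumes this file is named main.py)
```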
Posted 11 hours ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're looking for a Software Engineer - .NET. This role is office-based at our Hyderabad office.

As a Software Engineer, you will be designing and delivering solutions that scale to meet the needs of some of the largest and most innovative organizations in the world. You will work with team members to understand and exceed the expectations of users, constantly pushing the technical envelope, and helping Cornerstone deliver great results. Working in an agile software development framework focused on development sprints and regular release cycles, you'll own the complete feature story and mentor juniors.

In this role you will:
- Develop, maintain and enhance .NET applications and services to contribute to our legacy and cloud platform
- Analyze product and technical user stories and convey technical specifications in a concise and effective manner
- Code and deliver a working product, with a 'first time right' approach
- Participate in release planning, sprint planning, and technical design reviews; provide input as appropriate
- Partner with engineers, product managers, and other team members as appropriate
- Develop and maintain thorough knowledge and understanding of products
- Participate in key architectural decisions and design considerations
- Troubleshoot complex production issues and provide detailed RCA
- Interact with US-based Engineering, Product and Release teams as necessary

You Have What It Takes If You Have:
- Bachelor's or Master's degree in Computer Science or a related field
- 3+ years of active hands-on development experience in object-oriented languages like C#, Java or others
- Experience developing microservices, RESTful services, or other SOA development experience (preferably AWS)
- Exposure to ORMs like Entity Framework, NHibernate or similar
- Strong TDD approach and hands-on experience with tools like NUnit, xUnit or other testing tools
- Strong in OOP and SOLID design principles
- Understanding of basic AWS core services and basic architecture best practices
- Experience working on projects with public cloud providers like Amazon Web Services, Azure, Google Cloud, etc.
- Exposure to modern JavaScript frameworks
- Highly efficient data persistence design techniques; strong understanding of data retrieval performance (queries, caching); able to optimize designs/queries for scale
- Proficient experience with relational databases such as Microsoft SQL Server/Postgres; exposure to other non-relational DBs like MongoDB is a plus!
- Good understanding of how to deal with concurrency and parallel work streams
- Work experience in Agile SCRUM
- Very good at analyzing and debugging/troubleshooting functional and technical issues
- Good insight into performance/optimization techniques
- Good understanding of secure development practices; proactively codes to avoid security issues and is able to resolve all findings
- Excellent analytical, quantitative and problem-solving abilities
- Conversant in algorithms, software design patterns, and their best usage
- Self-motivated, requiring minimal oversight
- Good team player with the ability to handle multiple concurrent priorities in a fast-paced environment
- Strong interpersonal, written, and oral communication skills
- Passion for continuous process and technology improvement

Extra dose of awesomeness if you have:
- Experience with AWS

Our Culture: Spark Greatness. Shatter Boundaries. Share Success. Are you ready? Because here, right now – is where the future of work is happening.
Where curious disruptors and change innovators like you are helping communities and customers enable everyone – anywhere – to learn, grow and advance. To be better tomorrow than they are today.

Who We Are: Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are. With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and in nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today. Check us out on LinkedIn, Comparably, Glassdoor, and Facebook!
Posted 11 hours ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Automation Anywhere Developer
Location: Hyderabad/Bangalore
Notice: Immediate to 15 days
Experience: 4 to 10+ years
No. of roles: 3

Common JD: A highly skilled and result-oriented RPA Lead with 4+ years of experience in designing, developing, and deploying automation solutions using Automation Anywhere (A360 and v11.x). Adept at automating end-to-end SAP business processes across modules like MM, SD, FI, PM, and IS-U, with proven capabilities in integrating SAP systems using GUI scripting, BAPI, and IDoc-based mechanisms. Known for delivering scalable, secure, and reusable bots that optimize operational efficiency in enterprise environments.

Domain Expertise:
- SAP ERP automation (ECC & S/4HANA)
- Energy/utilities sector
- Meter-to-Cash (M2C)
- Procure-to-Pay (P2P)
- Asset maintenance and lifecycle
- Financial reconciliation and regulatory reporting

Technical Skills & Tools:

RPA tools and platforms:
- Automation Anywhere A360 (primary); experience with Automation Anywhere v11.x (migration/support)
- IQ Bot/Document Automation for semi-structured document processing
- Bot Insight for analytics and monitoring
- Control Room: deployment, bot scheduling, role management

SAP automation expertise:
- SAP GUI automation (via GUI scripting)
- BAPI/RFC integration using APIs/web services
- IDoc monitoring and processing
- SAP Fiori web automation (XPath, DOM model, etc.)
- Hands-on with SAP modules: MM, SD, FI, PM, HR, IS-U

Scripting and integration:
- VBScript, JavaScript, Python – for custom logic in bots (an illustrative Python sketch follows below)
- RESTful API integration with external systems
- Excel macros, CSV, JSON/XML data parsing

Development tools:
- Automation Anywhere Bot Editor
- Visual Studio Code, Notepad++
- Postman – for API testing
- Git – for version control (optional but beneficial)

Other technologies:
- MS Excel, Outlook, SharePoint
- OCR engines – Tesseract (via IQ Bot), ABBYY (basic familiarity)
- Experience with Jira, ServiceNow, or Azure DevOps for ticketing and tracking

Roles & Responsibilities:
- Develop and deploy RPA bots to automate repetitive and rule-based SAP processes
- Automate workflows involving SAP transactions (FB60, ME21N, VL10B, IW31, etc.)
- Interface bots with SAP business APIs and leverage IDocs for backend transactions
- Design reusable components for master data updates, invoice processing, and purchase order creation
- Handle SAP login, session management, and bot exception handling
- Work on IQ Bot for invoice OCR automation integrated with the SAP MM module
- Collaborate with SAP functional teams and business users to gather process requirements
- Create bot documentation, deployment runbooks, and post-production support guides

Certifications: Automation Anywhere Advanced RPA Professional (A360)

Soft skills:
- Strong problem-solving and analytical skills
- Good verbal and written communication
- Agile and collaborative working style
- Attention to detail and a process-driven mindset
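To illustrate the "custom logic in bots" point above, the sketch below shows a hypothetical Python helper a bot step might call: it parses an invoice JSON export and posts it to a REST endpoint. The endpoint, token handling, and field names are assumptions for illustration, not details from the posting.

```python
# Hypothetical bot helper: validate an invoice JSON file and post it onward.
import json
import requests

def post_invoice(path: str, api_url: str, token: str) -> int:
    with open(path, encoding="utf-8") as f:
        invoice = json.load(f)
    # Minimal shape check before handing off to the downstream system.
    for field in ("invoice_id", "amount", "vendor"):
        if field not in invoice:
            raise ValueError(f"missing field: {field}")
    resp = requests.post(
        api_url,  # placeholder endpoint
        json=invoice,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.status_code
```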
Posted 11 hours ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Summary: SAP SuccessFactors Functional Consultant - CL3 (US-USI Deloitte Technology – Product Engineering)

Responsibilities:
- Support the Deloitte Core HR & SuccessFactors applications
- Work with team members and Product Owners to analyze, recommend, plan, design, develop, and implement solutions to meet strategic, usability, performance, reliability, control, and security requirements
- Support and coordinate the efforts of Subject Matter Experts, Development, Quality Assurance, Usability, Training, Transport Management, and other internal resources for the successful implementation of system enhancements and fixes
- Perform SAP HR & SuccessFactors configuration as required
- Understand user requirements and create functional specifications with the required details
- Perform integration and acceptance testing
- Team with customers to encourage business process improvements, identify data integrity issues, and implement proactive solutions
- Interact with the development team on a regular basis – suggest and implement innovative ideas and solutions; ensure process adherence
- Communicate effectively with clients and the project team to ensure requirements are met, and recommend better solutions wherever applicable during the project
- Advance the goals and objectives of Product Engineering by providing cost-efficient, high-quality, client-focused solutions, according to established policies, procedures, and processes of the department and Firm

The Team: US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value and outcomes by leveraging a progressive and responsive talent structure. As Deloitte's primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte's success. It is the engine that drives Deloitte, serving many of the world's largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence.

Qualifications and Required Skills:

Educational background: A bachelor's degree in computer science or a related discipline. An advanced degree (e.g., MS) is preferred but not required; experience is the most relevant factor.

Experience: 3+ years of experience in the SAP HR & SuccessFactors module.

Required:
- At least 3 to 4 years of effective expertise in configuring and implementing the SAP SuccessFactors Employee Central, Onboarding, and Succession Management modules of HCM
- Strong functional knowledge of the system
- Minimum of 1 full life-cycle implementation of SAP SuccessFactors, from planning to configuration through go-live
- Certification in at least one SuccessFactors module
- Knowledge of data migrations and integrations to SAP CPI (Cloud Platform Integration)
- Ability to translate business KPIs into functional specifications
- Experience leading teams and driving their work to ensure project timelines are met
- Experience managing work streams, including monitoring for project issues and sound judgement for escalation
- Strong verbal and written communication skills, with an ability to express complex business concepts in non-technical terms
- Customer focused and able to effectively communicate with a variety of groups
- Strong analytical, problem-solving, multi-tasking and interpersonal skills

Work location: Bengaluru or Hyderabad

Preferred:
- Experience in managing integration platforms
- Experience working with SAFe, Scrum or Agile development processes
- Experience on AMS or supported Employee Central projects
- Prior experience with any of the cloud services such as Azure, AWS, GCP, etc.

Primary technologies: SAP SuccessFactors Employee Central; SAP R/3 and SAP S/4 platforms; SuccessFactors API knowledge

How you will grow: At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do.

SAP CI Specialist (US-USI Deloitte Technology – Product Engineering)

Responsibilities:
- Review detailed systems design documents and design consistent, extensible, and integrated data access enterprise components across distinct data sources and platforms
- Manage and oversee the design, programming, and implementation of various technical solutions
- Support creation of interface specifications and call flows for the various components in the design
- Architect, design, and implement world-class products and solutions; develop functional architecture design and contribute to product vision
- Design long-term, reliable, and end-to-end technical architectures
- Provide technical support for analysis of business requirements and applicability to the current or planned Platform/Enabler/API Platform capabilities
- Serve as a technical liaison between the business, project team and developers/testers
- Support and coordinate the efforts of Subject Matter Experts, Development, Quality Assurance, Usability, Training, Transport Management, and other internal resources for the successful implementation of system enhancements and fixes

The Team: US Deloitte Technology Product Engineering, as described above.

Qualifications and Required Skills:

Educational background: A bachelor's degree in computer science or a related discipline. An advanced degree (e.g., MS) is preferred but not required; experience is the most relevant factor.

Experience: 6-8 years of industry experience in the development and architecture of software platforms and products.

Required:
- At least 4 to 6 years of relevant expertise in SAP CPI/CI
- Experience with cloud integration, API management, web services and other integration tools
- Experience working with SAP, SuccessFactors, CI middleware or related products
- Working knowledge of Groovy, XSLT, JavaScript, and message mapping
- Integrating SAP HR and SuccessFactors with internal/external systems
- Working knowledge of service-oriented architecture (SOA)
- Working knowledge of enterprise application integration (EAI)
- A working knowledge of the CPI cockpit, the different palette functions, standard and custom iFlows, and admin-related tasks is crucial
- Knowledge to effectively manage and optimize connections between SAP and non-SAP systems

Preferred:
- Experience in managing integration platforms
- Minimum of 2 full life-cycle implementations of SAP SuccessFactors, from planning to configuration through go-live
- Certification in at least one CPI or SuccessFactors Employee Central module
- Excellent verbal and written communication skills

Primary technologies: Middleware (Cloud Platform Integration and API Management); SAP SuccessFactors EC; SAP R/3 platform; OData services (an illustrative OData call follows below)

Work location: Hyderabad

Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 214324
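For a flavor of the OData work both roles touch, here is a minimal, hypothetical Python call against a SuccessFactors OData v2 entity; the hostname, entity choice, and credentials are placeholders, and basic auth is shown only for brevity (OAuth is typical in practice).

```python
# Illustrative sketch: querying a SuccessFactors OData v2 entity.
# Host, credentials and entity are placeholders.
import requests

BASE = "https://api.example.successfactors.com/odata/v2"  # placeholder host

resp = requests.get(
    f"{BASE}/PerPersonal",                  # an Employee Central entity, as an example
    params={"$top": "5", "$format": "json"},
    auth=("user@companyId", "password"),    # basic auth for brevity only
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["d"]["results"]:     # OData v2 wraps results in d.results
    print(row.get("firstName"), row.get("lastName"))
```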
Posted 11 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview

Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.

- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities

- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects (a monitoring sketch follows below).
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications

- 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
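As a sketch of the monitoring work referenced above, the snippet below checks an Azure Data Factory pipeline run's status through the ARM REST API; the subscription, resource group, factory and run IDs are placeholders, and error handling is intentionally minimal.

```python
# Hypothetical monitoring check: fetch an ADF pipeline run's status via the
# Azure management REST API. All identifiers are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, FACTORY, RUN_ID = "<sub-id>", "<rg>", "<factory>", "<run-id>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}/"
    f"providers/Microsoft.DataFactory/factories/{FACTORY}/pipelineruns/{RUN_ID}"
)
resp = requests.get(
    url,
    params={"api-version": "2018-06-01"},
    headers={"Authorization": f"Bearer {token.token}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("status"))  # e.g. InProgress, Succeeded, Failed
```

A check like this is typically wired into an alerting loop so failed runs page the on-call before SLAs are breached.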
Posted 11 hours ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Role: Business Intelligence Engineer
Experience level: 4 to 6 years
Working mode: Remote
Employment period: 6-month contract (full time; this is not a part-time job)

Job Description: We are looking for a Business Intelligence Engineer with expertise in data warehousing and architecture. Experience with web analytics and Salesforce CRM data is a plus. The ideal candidate will have strong skills in SQL, Python, Microsoft Fabric, and Microsoft Azure, and experience managing data warehouse and data architecture projects to solve complex data challenges.

Key Responsibilities:
- Develop and maintain scalable data architectures.
- Lead data analytics projects using SQL, Python, Microsoft Fabric, and Microsoft Azure.
- Manage Google Analytics (GA4 and Universal Analytics) and Adobe Analytics integrations.
- Design and implement data models and databases.
- Analyze large datasets to uncover patterns and improve business performance (see the pandas sketch below).
- Ensure data quality and reliability.
- Work with stakeholders to deliver data-driven solutions.

Qualifications:
- Expertise in Salesforce CRM, Salesforce Marketing Cloud data, and Google Analytics (GA4).
- Strong experience in data warehousing in an enterprise environment and successful project delivery.
- Minimum of 2 years' experience in SQL and Python.
- Familiarity with Microsoft Fabric and Azure.
- Strong analytical, problem-solving, and communication skills.
- Leadership experience and the ability to mentor team members.

Why Join Us:
- Work with industry experts.
- Competitive salary and benefits.
- Innovative and dynamic work environment.
- Career growth opportunities.

Interested candidates should send their resume and cover letter to vaheda.rahamman@mafgroup.co.uk. We look forward to seeing how you can help drive our success!
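As a minimal illustration of the SQL/Python analysis work above, here is a pandas sketch that aggregates a hypothetical GA4-style event export; the file and column names are placeholder assumptions, not a guaranteed GA4 schema.

```python
# Illustrative sketch: daily users/events from a (hypothetical) GA4-style export.
import pandas as pd

events = pd.read_csv("ga4_events.csv")  # placeholder export file

daily = (
    events
    .assign(event_date=pd.to_datetime(events["event_date"]))
    .groupby(["event_date", "event_name"], as_index=False)
    .agg(users=("user_pseudo_id", "nunique"),   # distinct users per event/day
         events=("event_name", "size"))         # raw event counts
)
print(daily.head())
```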
Posted 11 hours ago
8.0 - 15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
TCS Hiring for Azure Cloud Architect (Platform) – PAN India
Experience: 8 to 15 years only
Job Location: PAN India

Required Technical Skill Set:
- Proven experience as a Solution Architect with a focus on Microsoft Azure
- Good knowledge of application development and migration
- Knowledge of Java or .NET
- Strong knowledge of Azure services: Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure DevOps
- Experience in cloud-native application development and containerization (Docker, Kubernetes)
- Proficiency in Infrastructure as Code (IaC) tools (e.g., Terraform, ARM templates, Bicep); a Python-SDK counterpart is sketched below
- Strong knowledge of Azure Active Directory, identity management, and security best practices
- Hands-on experience with CI/CD processes and DevOps practices
- Knowledge of networking concepts in Azure (VNets, Load Balancers, Firewalls)
- Excellent communication and stakeholder management skills

Key Responsibilities:
- Design end-to-end cloud solutions leveraging Microsoft Azure services
- Develop architecture and solution blueprints that align with business objectives
- Lead cloud adoption and migration strategies
- Collaborate with development, operations, and security teams to implement best practices
- Ensure solutions meet performance, scalability, availability, and security requirements
- Optimize cloud cost and performance
- Oversee the deployment of workloads on Azure using IaaS, PaaS, and SaaS services
- Implement CI/CD pipelines, automation, and infrastructure as code (IaC)
- Stay updated on emerging Azure technologies and provide recommendations

Kind regards,
Priyankha M
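Alongside the IaC tools listed above, the same provisioning can be expressed with the Azure SDK for Python; the sketch below creates a resource group under stated assumptions (placeholder subscription ID and a hypothetical name), and is illustrative rather than a recommended substitute for Terraform or Bicep.

```python
# Illustrative sketch: create a resource group with the Azure SDK for Python.
# Subscription ID and names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

rg = client.resource_groups.create_or_update(
    "rg-demo-platform",  # hypothetical name
    {"location": "centralindia", "tags": {"owner": "platform-team"}},
)
print(rg.name, rg.location)
```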
Posted 11 hours ago
5.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Birlasoft: Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design-thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

About the Job: We are looking for a highly experienced Senior Developer with a strong background in Python, AI/ML, and Generative AI, and experience with Azure OpenAI, Large Language Models (LLMs), cloud platforms like Azure and AWS, and various databases to join our dynamic team. The ideal candidate will have a proven track record of developing and deploying advanced AI solutions, with a focus on leveraging Generative AI techniques to drive innovation and efficiency.

Job Title: Technical Specialist
Location: Pune
Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.

Key Responsibilities:
- AI/ML development: design, develop, and implement advanced AI/ML models and algorithms to solve complex problems and enhance business processes.
- Generative AI solutions: utilize Generative AI techniques (e.g., GANs, VAEs) to create innovative applications and improve existing systems.
- Python programming: write clean, efficient, and scalable code in Python, using libraries such as TensorFlow, PyTorch, scikit-learn, and others.
- Data analysis and modeling: analyze large datasets to extract insights, build predictive models, and support data-driven decision-making.
- Azure OpenAI and LLMs: develop and deploy AI solutions using Azure OpenAI services and Large Language Models (LLMs) to enhance capabilities and performance (a minimal call sketch follows below).
- Cloud platforms: utilize cloud platforms like Azure and AWS for deploying and managing AI/ML solutions.
- Database management: work with various databases, including SQL, MongoDB, NoSQL, and vector databases, to store, manage, and retrieve data efficiently.
- Collaboration: work closely with cross-functional teams, including data scientists, engineers, and product managers, to understand requirements and deliver high-quality AI solutions.
- Mentorship: provide technical guidance and mentorship to junior developers and team members.
- Continuous improvement: stay updated with the latest advancements in AI/ML, Generative AI, Azure OpenAI, LLMs, cloud technologies, and database management, and apply them to enhance existing solutions and develop new ones.
- Documentation: document AI models, algorithms, and processes, and provide regular reports on project progress and outcomes.

Required Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience: minimum of 5-6 years of experience in AI/ML development, with a focus on Python, Generative AI, and Azure OpenAI.
- Technical skills:
  - Proficiency in Python and relevant libraries (TensorFlow, PyTorch, scikit-learn, etc.)
  - Extensive experience with Generative AI techniques (GANs, VAEs, etc.)
  - Strong understanding of machine learning algorithms, data analysis, and model deployment
  - Experience with Azure OpenAI services and Large Language Models (LLMs)
  - Proficiency in cloud platforms like Azure and AWS
  - Experience with databases such as SQL, MongoDB, NoSQL, and vector databases
  - Familiarity with containerization (Docker, Kubernetes)
- Soft skills:
  - Excellent problem-solving skills and attention to detail
  - Strong communication and collaboration skills
  - Ability to lead and mentor a team
  - Proactive and self-motivated, with a passion for innovation
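To make the Azure OpenAI requirement concrete, here is a minimal, hedged sketch using the openai Python SDK's AzureOpenAI client; the endpoint, API version, and deployment name are placeholders you would replace with your resource's values.

```python
# Illustrative sketch: one chat completion against an Azure OpenAI deployment.
# Endpoint, key, API version and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<resource>.openai.azure.com",  # placeholder
    api_key="<api-key>",                                   # use Key Vault in practice
    api_version="2024-02-01",                              # assumed version string
)

resp = client.chat.completions.create(
    model="<deployment-name>",  # Azure uses the deployment name as the model id
    messages=[{"role": "user",
               "content": "Summarize retrieval-augmented generation in one sentence."}],
)
print(resp.choices[0].message.content)
```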
Posted 11 hours ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Reference # 312920BR
Job Type: Full Time

Your role

Do you want to design and build next-generation business applications using the latest technologies? Are you confident at iteratively refining user requirements and removing any ambiguity? Are you motivated to work in a complex, diverse and global environment?

We're looking for a software engineer to:
- provide technology solutions that will solve business problems and strengthen our position as a digital leader in financial services
- analyse business requirements for the Compute Platform Team
- design, plan and deliver sustainable solutions using modern programming languages
- provide technical expertise and recommendations in assessing new software projects and initiatives to support and enhance our existing applications
- conduct code reviews and test software as needed, along with participating in application architecture and design and other phases of the SDLC
- see that proper operational controls and procedures are implemented for the move from test to production
- cooperate with other groups in engineering on the delivery of large-scale programs
- maintain and improve existing deployment and build mechanisms

Your team

Compute Platform is a global organization within Distributed Hosting providing technology platforms to underpin our partners' business applications. You will be part of the Compute Platform Team, which has a global footprint and works with clients and wider team members spread across the world. Together we drive consistency across business divisions and optimize operations and support costs by providing timely, robust, cost-effective solutions and products to our clients.

You'll be working in the IAAS Engineering Team, as part of the Technology division in Pune. As an Infrastructure Tech Engineer, you'll play an important role in engineering best-in-class Infrastructure as a Service provisioning techniques and cloud and hybrid IAAS landing-zone solutions, use modern tooling to provide infrastructure observability, and work with other teams like Operating System, Middleware, Storage, Database and Containers to introduce more automated ways of deploying, maintaining and integrating IAAS into our ecosystem. Teamwork is pivotal to our success. We offer flexibility in the workplace and equal opportunities to all our team members.

Your expertise
- confidence and mature experience in developing solutions for IAAS at large scale
- very good technical writing skills (runbooks, staging guides)
- experience with HPE, Dell, Lenovo, ESXi and Hyper-V technologies
- experience in Microsoft IAAS migration technologies and knowledge of programming in Azure-capable languages, such as ARM, PowerShell and YAML, and an understanding of Azure APIs
- knowledge of the SDLC and development methodologies/processes
- ability to effectively interact with a range of people throughout the organization at all levels
- a committed and visionary team player with an analytical and logical mindset (you can solve problems like nobody's business)
- fluency in English and eagerness to work in a global team

About Us

UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.
How We Hire We may request you to complete one or more assessments during the application process. Join us At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs. From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact? Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
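For flavour, the ARM-template and Azure API skills called out in this posting can also be exercised from Python. The sketch below, which assumes the azure-identity and azure-mgmt-resource SDKs, submits a trivial inline ARM template deployment; the subscription ID, resource group, and deployment name are hypothetical placeholders.

```python
# Minimal sketch: drive an ARM deployment via the Azure Python SDKs.
# Subscription ID, resource group, and template content are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")

# Trivial inline ARM template (normally loaded from a reviewed .json file).
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [],
}

poller = client.deployments.begin_create_or_update(
    "rg-compute-platform",        # hypothetical resource group
    "iaas-landing-zone-deploy",   # hypothetical deployment name
    {"properties": {"mode": "Incremental", "template": template}},
)
print(poller.result().properties.provisioning_state)
```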
Posted 11 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview Primary focus would be to perform development work within the Azure Data Lake environment and other related ETL technologies, with the responsibility of ensuring on-time and on-budget delivery, satisfying project requirements while adhering to enterprise architecture standards. This role will also have L3 responsibilities for ETL processes. Responsibilities Delivery of key Azure Data Lake projects within time and budget Contribute to solution design and build to ensure scalability, performance and reuse of data and other components Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards. Possess strong problem-solving abilities with a focus on managing to business outcomes through collaboration with multiple internal and external parties Enthusiastic, willing, able to learn and continuously develop skills and techniques - enjoys change and seeks continuous improvement A clear communicator both written and verbal with good presentational skills, fluent and proficient in the English language Customer focused and a team player Qualifications Bachelor’s degree in Computer Science, MIS, Business Management, or related field 5+ years’ experience in Information Technology 4+ years’ experience in Azure Data Lake Technical Skills: Proven experience in development activities in Data, BI or Analytics projects Solutions Delivery experience - knowledge of system development lifecycle, integration, and sustainability Strong knowledge of Pyspark and SQL Good knowledge of Azure Data Factory or Databricks Knowledge of Presto / Denodo is desirable Knowledge of FMCG business processes is desirable Non-Technical Skills: Excellent remote collaboration skills Experience working in a matrix organization with diverse priorities Exceptional written and verbal communication skills along with collaboration and listening skills Ability to work with agile delivery methodologies Ability to ideate requirements & design iteratively with business partners without formal requirements documentation
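To illustrate the batch-mode ETL work this role describes, here is a minimal PySpark sketch: read raw files from a data-lake path, clean one column, and write curated, partitioned output. The abfss:// paths and column names are hypothetical.

```python
# Minimal sketch: batch ETL over an Azure Data Lake path with PySpark.
# Storage paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls-batch-etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/"))

curated = (raw
           .filter(F.col("amount").isNotNull())          # drop incomplete rows
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("load_date", F.current_date()))   # audit column

(curated.write
 .mode("overwrite")
 .partitionBy("load_date")
 .parquet("abfss://curated@examplelake.dfs.core.windows.net/sales/"))
```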
Posted 11 hours ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
People Tech Group is a leading Enterprise Solutions, Digital Transformation, Data Intelligence, and Modern Operations services provider. Founded in 2006 in Redmond, Washington, USA, we have since expanded to India, where we are based out of Hyderabad, Bangalore, Pune and Chennai with an overall strength of 3,000+ employees. We have a presence in 4 countries: US, Canada, India and Costa Rica. In a recent development, the company was acquired by Quest Global, one of the world's largest engineering solutions providers, with a 20,000+ employee strength, 70+ global delivery service centers, and headquarters in Singapore. Going forward, we are all part of Quest Global. Job Role: Technical Project Manager _ Full Time Opportunity _ People Tech Group Experience: 8+ Years Notice Period: Immediate-15 Days Location: Hyderabad Technical Project Manager (Software Development) We’re looking for a seasoned Technical Project Manager with a strong development background to lead complex software projects. This role blends hands-on technical leadership with project delivery, stakeholder management, and Agile execution. Key Responsibilities: Lead full SDLC project execution from planning to post-release. Collaborate with cross-functional teams (Dev, QA, UI/UX, DevOps). Translate business needs into technical solutions. Manage project plans, timelines, budgets, and risks. Drive Agile ceremonies and ensure engineering best practices. Requirements: 8–12 years in IT, with 4+ years in technical/project leadership. Development experience in Java/.NET/Python/Node.js. Familiarity with front-end frameworks and cloud platforms (AWS/Azure/GCP). Strong Agile/DevOps delivery experience. Excellent communication and stakeholder management skills. Preferred: Certifications: PMP, CSM, PMI-ACP, etc.
Posted 11 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description for FinOps Practitioner Exp - 10 to 14 Yrs, Location - Noida and Hyderabad – AWS/AZURE/GCP With a “cloud-first” strategy, the organization needs to improve its workload management processes, as the procurement of cloud resources is no longer done by the Finance team alone, as in the past, but by Cloud Engineers & Product Owners. Adopting FinOps principles has helped to create visibility of cloud spend, optimize resource usage, and secure longer-term commitments for discounted rates on our workloads. The candidate should have the following skills: Manage cost visibility of public cloud platforms for AWS/AZURE/GCP Monitor cloud spend and create budget alerts Review & recommend the FinOps tool Facilitate the implementation of the FinOps tool Conduct periodic reports and regular reviews of cloud spend with the Google Cloud Billing Console & with other FinOps tools Manage cloud commitments (CUDs, Savings Plans, RIs) & suggest use of Preemptible or Spot instances, wherever suitable Identify unused resources and schedule decommissioning Optimize existing resources by rightsizing instances Optimize architecture by migrating to scalable resources Define the FinOps framework and roadmap Support Finance for budgeting forecasts and Enterprise agreements with providers Become the bridge between Finance, Product Owners & Cloud Engineers Advocate FinOps principles in day-to-day operations & instill a FinOps culture among the stakeholders Requirements Bachelor’s degree holder in Computer Science, Information Technology or other relevant fields At least 5 years of experience on public cloud platforms and at least 2 years of exposure to AWS/AZURE/GCP billing management FinOps Certified Practitioner is a must. Associate or Professional level certified candidate in AWS/AZURE/GCP is a plus. Good understanding of AWS/AZURE/GCP billing methodology, organization & project structure Good understanding of instance types, storage types & of other AWS/AZURE/GCP services Good understanding of cost drivers for cloud resources Capable of consolidating data and delivering aggregate views/reports Understanding of variable cost models for cloud resources Possess moderate verbal and written communication skills to work effectively with technical and nontechnical personnel at various levels in the organization and with vendors Good understanding of MS Excel, PowerPoint and any other presentation application. Understanding of Power BI reports.
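Tasks such as "identify unused resources" and "rightsizing" from this posting reduce to straightforward analysis once billing data is exported. A minimal sketch, assuming a hypothetical CSV export with per-resource daily cost and CPU utilisation columns:

```python
# Minimal sketch: flag idle/oversized resources from a billing export.
# The file name and column names are hypothetical; real exports differ
# by provider (AWS CUR, Azure Cost Management, GCP Billing BigQuery).
import pandas as pd

usage = pd.read_csv("cloud_usage_export.csv")  # hypothetical export

summary = usage.groupby("resource_id").agg(
    monthly_cost=("daily_cost", "sum"),
    avg_cpu_pct=("cpu_utilisation_pct", "mean"),
)

# Low utilisation but material spend: candidates for rightsizing/decommission.
idle = summary[(summary["avg_cpu_pct"] < 5) & (summary["monthly_cost"] > 100)]
print(idle.sort_values("monthly_cost", ascending=False).head(20))
```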
Posted 11 hours ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview In this role, we are seeking an Associate Manager - Offshore Program & Delivery Management to oversee program execution, governance, and service delivery across DataOps, BIOps, AIOps, MLOps, Data IntegrationOps, SRE, and Value Delivery programs. This role requires expertise in offshore execution, cost optimization, automation strategies, and cross-functional collaboration to enhance operational excellence. Manage and support DataOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy. Assist in real-time monitoring, automated alerting, and self-healing mechanisms to improve system reliability and performance. Contribute to the development and enforcement of governance models and operational frameworks to streamline service delivery and execution roadmaps. Support the standardization and automation of pipeline workflows, report generation, and dashboard refreshes to enhance efficiency. Collaborate with global teams to support Data & Analytics transformation efforts and ensure sustainable, scalable, and cost-effective operations. Assist in proactive issue identification and self-healing automation, enhancing the sustainment capabilities of the PepsiCo Data Estate. Responsibilities Support DataOps and SRE operations, assisting in offshore delivery of DataOps, BIOps, Data IntegrationOps, and related initiatives. Assist in implementing governance frameworks, tracking KPIs, and ensuring adherence to operational SLAs. Contribute to process standardization and automation efforts, improving service efficiency and scalability. Collaborate with onshore teams and business stakeholders, ensuring alignment of offshore activities with business needs. Monitor and optimize resource utilization, leveraging automation and analytics to improve productivity. Support continuous improvement efforts, identifying operational risks and ensuring compliance with security and governance policies. Assist in managing day-to-day DataOps activities, including incident resolution, SLA adherence, and stakeholder engagement. Participate in Agile work intake and management processes, contributing to strategic execution within data platform teams. Provide operational support for cloud infrastructure and data services, ensuring high availability and performance. Document and enhance operational policies and crisis management functions, supporting rapid incident response. Promote a customer-centric approach, ensuring high service quality and proactive issue resolution. Assist in team development efforts, fostering a collaborative and agile work environment. Adapt to changing priorities, supporting teams in maintaining focus on key deliverables. Qualifications 6+ years of technology experience in a global organization, preferably in the CPG industry. 4+ years of experience in Data & Analytics, with a foundational understanding of data engineering, data management, and operations. 3+ years of cross-functional IT experience, working with diverse teams and stakeholders. 1-2 years of leadership or coordination experience, supporting team operations and service delivery. Strong communication and collaboration skills, with the ability to convey technical concepts to non-technical audiences. Customer-focused mindset, ensuring high-quality service and responsiveness to business needs. Experience in supporting technical operations for enterprise data platforms, preferably in a Microsoft Azure environment. 
Basic understanding of Site Reliability Engineering (SRE) practices, including incident response, monitoring, and automation. Ability to drive operational stability, supporting proactive issue resolution and performance optimization. Strong analytical and problem-solving skills, with a continuous improvement mindset. Experience working in large-scale, data-driven environments, ensuring smooth operations of business-critical solutions. Ability to support governance and compliance initiatives, ensuring adherence to data standards and best practices. Familiarity with data acquisition, cataloging, and data management tools. Strong organizational skills, with the ability to manage multiple priorities effectively.
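The "automated alerting and self-healing" responsibility in this posting follows a common pattern: compare a pipeline's last successful run against its SLA, alert, and attempt an automated restart before escalating. A minimal sketch, with every helper a hypothetical stand-in for the orchestrator and alerting stack actually in use:

```python
# Minimal sketch: SLA freshness check with an automated restart attempt.
# last_success/send_alert/restart are hypothetical stand-ins.
from datetime import datetime, timedelta, timezone

SLA = timedelta(hours=4)

def last_success(pipeline: str) -> datetime:
    # Hypothetical: query the scheduler's metadata store.
    return datetime(2024, 1, 1, 6, 0, tzinfo=timezone.utc)

def send_alert(message: str) -> None:
    print(f"ALERT: {message}")  # hypothetical: pager/Teams/email hook

def restart(pipeline: str) -> None:
    print(f"Restarting {pipeline}")  # hypothetical: orchestrator API call

def check(pipeline: str) -> None:
    age = datetime.now(timezone.utc) - last_success(pipeline)
    if age > SLA:
        send_alert(f"{pipeline} has not succeeded for {age}; SLA is {SLA}.")
        restart(pipeline)  # self-healing attempt before human escalation

check("daily_sales_refresh")
```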
Posted 11 hours ago
10.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview The Software Engineering Manager will play a pivotal role in software development activities and long-term initiative planning and collaboration across the Strategy & Transformation (S&T) organization. Software Engineering is the cornerstone of scalable digital transformation across PepsiCo’s value chain. This leader will own the end-to-end software development experience, deliver high-quality software as part of the DevOps process, and have accountability for our business operations. The leader in this role will be a highly experienced Software Engineering Manager, hands-on with Java/Python/Azure technologies, who will lead the design, development and support of our Integration platform. This role is critical in shaping our integration landscape, establishing development best practices, and mentoring a world-class engineering team. This role will play a key leadership role in a product-focused, high-growth startup/enterprise environment, owning end-to-end integration services. Responsibilities Support and guide a team of engineers in developing and maintaining Digital Products and Applications (DPA). Oversee the comprehensive development of integration services for the Integration platform utilizing Java and Python on Azure. Design scalable, performant, and secure systems ensuring maintainability and quality. Establish code standards and best practices; conduct code reviews and technical audits. Advise on the selection of tools, libraries, and frameworks. Research emerging technologies and provide recommendations for their adoption. Uphold high standards of integration services and performance across platforms. Foster partnerships with User Experience, Product Management, IT, Data & Analytics, Emerging Tech, Innovation, and Process Engineering teams to deliver the Digital Products portfolio. Create a roadmap and schedule for implementation based on business requirements and strategy. Demonstrate familiarity with AI tools and platforms such as OpenAI (GPT-3/4, Assistants API), Anthropic, or similar LLM providers. Integrate AI capabilities into applications, including AI copilots and AI agents, smart chatbots, automated data processors, and content generators. Understand prompt engineering, context handling, and AI output refinement. Lead multi-disciplinary, high-performance work teams distributed across remote locations effectively. Build, manage, develop, and mentor a team of engineers. Engage with executives throughout the company to advocate the narrative surrounding software engineering. Expand DPA capabilities through a customer-focused, services-driven digital solutions platform leveraging data and AI to deliver automated and personalized experiences. Manage and appropriately escalate delivery impediments, risks, issues, and changes associated with engineering initiatives to stakeholders. Collaborate with key business partners to recommend solutions that best meet the strategic needs of the business.
Qualifications Bachelor's or Master's degree in Computer Science, Engineering, or a related field 10-12 years of software design and development (Java, Spring Boot, Python) 8-10 years of Java/Python development, enterprise-grade applications expertise 3-5 years of microservices development and RESTful API design 3-5 years with cloud-native solutions (Azure preferred, AWS, Google Cloud) Strong understanding of web protocols, REST APIs, SOA 3-5 years as lead developer, mentoring teams, driving technical direction Proficient with relational databases (Oracle, MSSQL, MySQL) and NoSQL databases (Couchbase, MongoDB) Exposure to ADF or ADB Experience with Azure Kubernetes Service or equivalent Knowledge of event-driven architecture and message brokers (Kafka, ActiveMQ) Data integration experience across cloud and on-prem systems Deep understanding of CI/CD pipelines, DevOps automation Ability to write high-quality, secure, scalable code Experience delivering mission-critical, high-throughput systems Strong problem-solving, communication, stakeholder collaboration skills Experience in Scaled Agile (SAFe) as technical lead Knowledge of the Salesforce ecosystem (Sales Cloud/CRM) is a plus
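The event-driven architecture and Kafka experience listed above boil down to producing and consuming messages on topics. Below is a minimal producer sketch using the kafka-python client; the broker address, topic, and payload are hypothetical.

```python
# Minimal sketch: publish a JSON event to Kafka with kafka-python.
# Broker, topic, and event payload are hypothetical placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"order_id": "A-1001", "status": "CREATED"}  # hypothetical event
producer.send("orders.events", value=event)
producer.flush()  # block until the broker acknowledges delivery
```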
Posted 11 hours ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview Deputy Director - Data Engineering PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics, and new product development. PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company. Responsible for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders. Increase awareness about available data and democratize access to it across the company. As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create & lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities Data engineering lead role for D&Ai data modernization (MDIP). Ideally, the candidate is flexible to work an alternative schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon the coverage requirements of the job. The candidate can work with their immediate supervisor to change the work schedule on a rotational basis depending on the product and project requirements. Manage a team of data engineers and data analysts by delegating project responsibilities and managing their flow of work as well as empowering them to realize their full potential. Design, structure and store data into unified data models and link them together to make the data reusable for downstream products. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Create reusable accelerators and solutions to migrate data from legacy data warehouse platforms such as Teradata to Azure Databricks and Azure SQL.
Enable and accelerate standards-based development prioritizing reuse of code, adopt test-driven development, unit testing and test automation with end-to-end observability of data Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality, performance and cost. Collaborate with internal clients (product teams, sector leads, data science teams) and external partners (SI partners/data providers) to drive solutioning and clarify solution requirements. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects to build and support the right domain architecture for each application following well-architected design standards. Define and manage SLAs for data products and processes running in production. Create documentation for learnings and knowledge transfer to internal associates. Qualifications 12+ years of overall technology experience, including engineering and data management, with at least 5+ years of hands-on software development, data engineering, and systems architecture. 8+ years of experience with Data Lakehouse, Data Warehousing, and Data Analytics tools. 6+ years of experience in SQL optimization and performance tuning on MS SQL Server, Azure SQL or any other popular RDBMS 6+ years of experience in Python/Pyspark/Scala programming on big data platforms like Databricks 4+ years of cloud data engineering experience in Azure or AWS. Fluent with Azure cloud services. Azure Data Engineering certification is a plus. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Great Expectations. Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets. Experience with at least one business intelligence tool such as Power BI or Tableau Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes. Experience with version control systems like ADO, GitHub and CI/CD tools for DevOps automation and deployments. Experience with Azure Data Factory, Azure Databricks and Azure Machine Learning tools. Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or in the supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. BA/BS in Computer Science, Math, Physics, or other technical fields. The candidate must be flexible to work an alternative work schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon product and project coverage requirements of the job. Candidates are expected to be in the office at the assigned location at least 3 days a week, and the days at work need to be coordinated with the immediate supervisor Skills, Abilities, Knowledge: Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior level management. Proven track record of leading, mentoring data teams. Strong change manager. Comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements.
High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along. Consistently attain/exceed individual and team goals. Ability to lead others without direct authority in a matrixed environment. Comfortable working in a hybrid environment with teams consisting of contractors as well as FTEs spread across multiple PepsiCo locations. Domain knowledge in the CPG industry with a Supply Chain/GTM background is preferred.
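The "automation and monitoring frameworks that capture metrics and operational KPIs" responsibility above can start as small as a decorator that times each pipeline step and counts rows. A minimal, framework-agnostic sketch; where the metrics are shipped (log, table, dashboard) is deliberately left open:

```python
# Minimal sketch: capture duration and row-count KPIs per pipeline step.
import time
from functools import wraps

def track_kpis(step_name: str):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = fn(*args, **kwargs)
            elapsed = time.monotonic() - start
            rows = len(result) if hasattr(result, "__len__") else None
            # Hypothetical sink: swap print for your metrics store.
            print(f"[KPI] step={step_name} seconds={elapsed:.2f} rows={rows}")
            return result
        return wrapper
    return decorator

@track_kpis("load_customers")
def load_customers():
    return [{"id": 1}, {"id": 2}]  # stand-in for a real extract

load_customers()
```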
Posted 11 hours ago
15.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction Joining the IBM Technology Expert Labs teams means you'll have a career delivering world-class services for our clients. As the ultimate expert in IBM products, you'll bring together all the necessary technology and services to help customers solve their most challenging problems. Working in IBM Technology Expert Labs means accelerating the time to value confidently and ensuring speed and insight while our clients focus on what they do best—running and growing their business. Excellent onboarding and industry-leading learning culture will set you up for a positive impact, while advancing your career. Our culture is collaborative and experiential. As part of a team, you will be surrounded by bright minds and keen co-creators—always willing to help and be helped—as you apply passion to work that will positively impact the world around us. Your Role And Responsibilities This candidate is responsible for: DB2 installation and configuration on the following environments: On-Prem, Multi-Cloud, Red Hat OpenShift Cluster, HADR, Non-DPF and DPF. Migration of other databases to Db2 (e.g., Teradata / Snowflake / SAP / Cloudera to Db2 migration) Create high-level designs and detail-level designs, maintaining product roadmaps which include both modernization and leveraging cloud solutions Design scalable, performant, and cost-effective data architectures within the Lakehouse to support diverse workloads, including reporting, analytics, data science, and AI/ML. Perform health checks of the databases, make recommendations and deliver tuning at the database and system level. Deploy DB2 databases as containers within Red Hat OpenShift clusters Configure containerized database instances, persistent storage, and network settings to optimize performance and reliability. Lead the architectural design and implementation of solutions on IBM watsonx.data, ensuring alignment with overall enterprise data strategy and business objectives. Define and optimize the watsonx.data ecosystem, including integration with other IBM watsonx components (watsonx.ai, watsonx.governance) and existing data infrastructure (DB2, Netezza, cloud data sources) Establish best practices for data modeling, schema evolution, and data organization within the watsonx.data lakehouse Act as a subject matter expert on Lakehouse architecture, providing technical leadership and guidance to data engineering, analytics, and development teams. Mentor junior architects and engineers, fostering their growth and knowledge in modern data platforms. Participate in the development of architecture governance processes and promote best practices across the organization. Communicate complex technical concepts to both technical and non-technical stakeholders. Required Technical And Professional Expertise 15+ years of experience in data architecture, data engineering, or a similar role, with significant hands-on experience in cloud data platforms Strong proficiency in DB2, SQL and Python. Strong understanding of: database design and modelling (dimensional, normalized, NoSQL schemas), normalization and indexing, data warehousing and ETL processes, cloud platforms (AWS, Azure, GCP), big data technologies (e.g., Hadoop, Spark) Database migration project experience from one database to another (target database Db2). Experience in deployment of DB2 databases as containers within Red Hat OpenShift clusters and configuring containerized database instances, persistent storage, and network settings to optimize performance and reliability.
Excellent communication, collaboration, problem-solving, and leadership skills. Preferred Technical And Professional Experience Experience with machine learning environments and LLMs Certification in IBM watsonx.data or related IBM data and AI technologies Hands-on experience with a Lakehouse platform (e.g., Databricks, Snowflake) Exposure to implementation or understanding of the DB replication process Experience with integrating watsonx.data with GenAI or LLM initiatives (e.g., RAG architectures). Experience with NoSQL databases (e.g., MongoDB, Cassandra). Experience in data modeling tools (e.g., ER/Studio, ERwin). Knowledge of data governance and compliance standards (e.g., GDPR, HIPAA). Strong soft skills.
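The database health checks this role mentions are typically scripted. Below is a minimal sketch using the ibm_db driver to read a few snapshot metrics from the SYSIBMADM.SNAPDB administrative view; the connection details are placeholders, and the query assumes a DB2 LUW system where that view is available.

```python
# Minimal sketch: connect to DB2 and read basic health metrics.
# Connection string values are hypothetical placeholders.
import ibm_db

conn_str = (
    "DATABASE=SAMPLE;HOSTNAME=db2host.example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=<password>;"
)
conn = ibm_db.connect(conn_str, "", "")

stmt = ibm_db.exec_immediate(
    conn, "SELECT DB_NAME, DB_STATUS, TOTAL_CONS FROM SYSIBMADM.SNAPDB"
)
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row)                     # e.g. status and connection counts
    row = ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)
```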
Posted 11 hours ago
7.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Manager – Azure Data Architect As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key business and functions like Banking, Insurance, Manufacturing, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance. The opportunity We’re looking for Managers (Big Data Architects) with strong technology and data understanding and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as part of a growing Data and Analytics team. Your Key Responsibilities Develop standardized practices for delivering new products and capabilities using Big Data & cloud technologies, including data acquisition, transformation, analysis, modelling, governance & data management skills Interact with senior client technology leaders, understand their business goals, create and propose solutions, estimate effort, build architectures, and develop and deliver technology solutions Define and develop client-specific best practices around data management within a cloud environment Recommend design alternatives for data ingestion, processing and provisioning layers Design and develop data ingestion programs to process large data sets in batch mode using ADB, ADF, PySpark, Python, Synapse Develop data ingestion programs to ingest real-time data from LIVE sources using Apache Kafka, Spark Streaming and related technologies Have managed teams and have experience in end-to-end delivery Have experience of building technical capability and teams to deliver Skills And Attributes For Success Strong understanding & familiarity with all Cloud Ecosystem components Strong understanding of underlying Cloud Architectural concepts and distributed computing paradigms Experience in the development of large-scale data processing Experience with CI/CD pipelines for data workflows in Azure DevOps Hands-on programming experience in ADB, ADF, Synapse, Python, PySpark, SQL Hands-on expertise in cloud services like AWS and/or the Microsoft Azure ecosystem Solid understanding of ETL methodologies in a multi-tiered stack with Data Modelling & Data Governance Experience with BI and data analytics databases Experience in converting business problems/challenges to technical solutions considering security, performance, scalability etc. Experience in enterprise-grade solution implementations Experience in performance benchmarking enterprise applications Strong stakeholder, client, team, process & delivery management skills To qualify for the role, you must have Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communicator (written and verbal, formal and informal). Ability to multi-task under pressure and work independently with minimal supervision. Strong verbal and written communication skills. Must be a team player and enjoy working in a cooperative and collaborative team environment. Adaptable to new technologies and standards.
Participate in all aspects of the Big Data solution delivery life cycle including analysis, design, development, testing, production deployment, and support. Minimum 7 years hands-on experience in one or more of the above areas. Minimum 8-11 years industry experience Ideally, you’ll also have Project management skills Client management skills Solutioning skills Nice to have: Knowledge in data security best practices Knowledge in Data Architecture Design Patterns What We Look For People with technical experience and enthusiasm to learn new things in this fast-moving environment What Working At EY Offers At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
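The real-time ingestion requirement above (Apache Kafka plus Spark Streaming) is usually met today with Spark Structured Streaming. A minimal sketch follows; it assumes the spark-sql-kafka connector is on the classpath, and the broker, topic, and output paths are hypothetical.

```python
# Minimal sketch: stream Kafka events into a data-lake path with
# Spark Structured Streaming. Broker/topic/paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-ingest").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "sales.events")
          .load()
          .select(F.col("value").cast("string").alias("payload")))

query = (events.writeStream
         .format("parquet")
         .option("path", "/mnt/lake/streaming/sales/")
         .option("checkpointLocation", "/mnt/lake/_checkpoints/sales/")
         .start())
query.awaitTermination()
```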
Posted 11 hours ago
15.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction Joining the IBM Technology Expert Labs teams means you’ll have a career delivering world-class services for our clients. As the ultimate expert in IBM products, you’ll bring together all the necessary technology and services to help customers solve their most challenging problems. Working in IBM Technology Expert Labs means accelerating the time to value confidently and ensuring speed and insight while our clients focus on what they do best—running and growing their business. Excellent onboarding and industry-leading learning culture will set you up for a positive impact, while advancing your career. Our culture is collaborative and experiential. As part of a team, you will be surrounded by bright minds and keen co-creators—always willing to help and be helped—as you apply passion to work that will positively impact the world around us. Your Role And Responsibilities As a Delivery Consultant, you will work closely with IBM clients and partners to design, deliver, and optimize IBM Technology solutions that align with your clients’ goals. In this role, you will apply your technical expertise to ensure world-class delivery while leveraging your consultative skills, such as problem-solving, issue-/hypothesis-based methodologies, communication, and service orientation skills. As a member of IBM Technology Expert Labs, a team that is client-focused, courageous, pragmatic, and technical, you’ll collaborate with clients to optimize and trailblaze new solutions that address real business challenges. If you are passionate about success with both your career and solving clients’ business challenges, this role is for you. To help achieve this win-win outcome, a ‘day-in-the-life’ of this opportunity may include, but not be limited to… Solving Client Challenges Effectively: Understanding clients’ main challenges and developing solutions that help them reach true business value by working through the phases of design, development, integration, implementation, migration and product support with a sense of urgency. Agile Planning and Execution: Creating and executing agile plans where you are responsible for installing and provisioning, testing, migrating to production, and day-two operations. Technical Solution Workshops: Conducting and participating in technical solution workshops. Building Effective Relationships: Developing successful relationships at all levels—from engineers to CxOs—with experience of navigating challenging debate to reach healthy resolutions. Self-Motivated Problem Solver: Demonstrating a natural bias towards self-motivation, curiosity, and initiative in addition to navigating data and people to find answers and present solutions. Collaboration and Communication: Strong collaboration and communication skills as you work across the client, partner, and IBM team. Preferred Education Bachelor's Degree Required Technical And Professional Expertise In-depth knowledge of the IBM Data & AI portfolio.
15+ years of experience in software services 10+ years of experience in the planning, design, and delivery of one or more products from the IBM Data Integration and IBM Data Intelligence product platforms Experience in designing and implementing solutions on IBM Cloud Pak for Data, IBM DataStage NextGen, Orchestration Pipelines 10+ years’ experience with ETL and database technologies Experience in architectural planning and implementation for the upgrade/migration of these specific products Experience in designing and implementing Data Quality solutions Experience with installation and administration of these products Excellent understanding of cloud concepts and infrastructure Excellent verbal and written communication skills are essential Preferred Technical And Professional Experience Experience with any of the DataStage, Informatica, SAS, Talend products Experience with any of IKC, IGC, Axon Experience with programming languages like Java/Python Experience in AWS, Azure, Google or IBM cloud platforms Experience with Red Hat OpenShift Good-to-have knowledge: Apache Spark, Shell scripting, GitHub, JIRA
Posted 11 hours ago
3.0 - 5.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Job Title: Python Full Stack Developer with Azure Cloud and SQL Experience Job Description: As a Python Full Stack Developer, you will be responsible for designing, developing, and maintaining web applications using Python and related technologies. You will work closely with cross-functional teams to deliver high-quality software solutions. The ideal candidate will also have experience with Azure Cloud services and SQL development. Key Responsibilities: Design and develop scalable web applications using Python, Django/Flask, and JavaScript frameworks (e.g., React, Angular, or Vue.js). Collaborate with UI/UX designers to create user-friendly interfaces and enhance user experience. Implement RESTful APIs and integrate with third-party services. Manage and optimize databases using SQL (e.g., PostgreSQL, MySQL, or SQL Server). Deploy and manage applications on Azure Cloud, utilizing services such as Azure App Services, Azure Functions, and Azure SQL Database. Write clean, maintainable, and efficient code following best practices and coding standards. Conduct code reviews and provide constructive feedback to team members. Troubleshoot and resolve application issues, ensuring high availability and performance. Stay updated with emerging technologies and industry trends to continuously improve development processes. Qualifications: Requires a minimum of 3-5 years of prior relevant experience Bachelor’s degree in Computer Science, Information Technology, or a related field. Proven experience as a Full Stack Developer with a strong focus on Python development. Proficiency in front-end technologies such as HTML, CSS, and JavaScript frameworks (React, Angular, or Vue.js). Experience with back-end frameworks such as Django or Flask. Strong knowledge of SQL and experience with database design and management. Hands-on experience with Azure Cloud services and deployment strategies. Familiarity with version control systems (e.g., Git). Excellent problem-solving skills and attention to detail. Strong communication and teamwork abilities. Preferred Qualifications: Experience with containerization technologies (e.g., Docker, Kubernetes). Knowledge of DevOps practices and CI/CD pipelines. Familiarity with Agile development methodologies. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
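As a taste of the stack this posting describes, here is a minimal Flask endpoint backed by a SQL query. It uses sqlite3 purely so the sketch is self-contained; an Azure SQL or PostgreSQL connection would slot into the same pattern via its own driver.

```python
# Minimal sketch: one REST endpoint over a SQL table with Flask.
# Uses sqlite3 for self-containment; the database file is hypothetical.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

def get_db() -> sqlite3.Connection:
    conn = sqlite3.connect("app.db")   # placeholder local database
    conn.row_factory = sqlite3.Row     # rows behave like dicts
    return conn

@app.route("/api/items", methods=["GET"])
def list_items():
    with get_db() as conn:
        rows = conn.execute("SELECT id, name FROM items ORDER BY id").fetchall()
    return jsonify([dict(r) for r in rows])

if __name__ == "__main__":
    app.run(debug=True)
```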
Posted 11 hours ago
Azure, Microsoft's cloud computing platform, has seen rapid growth in demand for skilled professionals in India. The job market for Azure roles in India is booming, with numerous opportunities available for job seekers with the right skills and experience.
India's major technology hubs are known for their thriving tech industries and have a high demand for Azure professionals.
The average salary range for Azure professionals in India varies based on experience and skill level. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
A typical career path in Azure roles may start as a Junior Developer, progress to a Senior Developer, then move on to a Tech Lead position. With experience and additional certifications, professionals can advance to roles such as Solutions Architect, Cloud Consultant, or Azure DevOps Engineer.
In addition to Azure expertise, professionals in this field may benefit from having skills in: - Cloud computing concepts - Programming languages such as C# or Python - Networking fundamentals - Security and compliance knowledge
As you explore opportunities in Azure jobs in India, remember to continuously upskill and stay updated with the latest trends in cloud computing. Prepare thoroughly for interviews and showcase your expertise confidently to land your dream job in this thriving field. Good luck!