6.0 years
0 Lacs
Goregaon, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager
Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Responsibilities:
• Designs, implements and maintains reliable and scalable data infrastructure
• Writes, deploys and maintains software to build, integrate, manage, maintain, and quality-assure data
• Develops and delivers large-scale data ingestion, data processing, and data transformation projects on the Azure cloud
• Mentors and shares knowledge with the team through design reviews, discussions and prototypes
• Works with customers to deploy, manage, and audit standard processes for cloud products
• Adheres to and advocates for software and data engineering standard processes (e.g. data engineering pipelines, unit testing, monitoring, alerting, source control, code review and documentation)
• Deploys secure and well-tested software that meets privacy and compliance requirements; develops, maintains and improves the CI/CD pipeline
• Owns service reliability and follows site-reliability engineering standard processes: joins on-call rotations for services they maintain, and is responsible for defining and maintaining SLAs
• Designs, builds, deploys and maintains infrastructure as code; containerizes server deployments
• Works as part of a cross-disciplinary team, collaborating closely with other data engineers, architects, software engineers, data scientists, data managers and business partners in a Scrum/Agile setup
Mandatory skill sets (‘must have’ knowledge, skills and experience): Synapse, ADF, Spark, SQL, PySpark, Spark SQL
Preferred skill sets (‘good to have’ knowledge, skills and experience): Cosmos DB, data modeling, Databricks, Power BI, experience building analytics solutions with SAP as the data source for ingestion pipelines
Depth: The candidate should have in-depth, hands-on experience with end-to-end solution design in Azure Data Lake, ADF pipeline development and debugging, various file formats, Synapse and Databricks, with excellent coding skills in PySpark and SQL and strong logic-building capabilities. They should have sound knowledge of optimizing workloads.
Years of experience required: 6 to 9 years of relevant experience
Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% and above)
Expected joining: 3 weeks
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering, Bachelor of Technology, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
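The mandatory skills above center on PySpark and SQL logic-building. As a rough illustration of the kind of aggregation such pipelines perform, here is a sketch using stdlib sqlite3 rather than a Spark cluster; the table and column names are invented for the example:

```python
# Hypothetical sketch: the aggregation a PySpark/Spark SQL pipeline would
# run at scale, expressed against an in-memory SQLite database so it runs
# anywhere. All table/column names are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("west", 100.0), ("west", 50.0), ("east", 70.0)],
)

# Roughly equivalent to df.groupBy("region").agg(sum("amount")) in PySpark
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # → [('east', 70.0), ('west', 150.0)]
```

The same GROUP BY logic carries over to Synapse or Databricks SQL; only the execution engine changes.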
Posted 4 days ago
1.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Job Description
Who We Are
At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals.
INVESTMENT BANKING
Goldman Sachs Investment Banking (IB) works on some of the most complex financial challenges and transactions in the market today. Whether advising on a merger, providing financial solutions for an acquisition, or structuring an initial public offering, we handle projects that help clients at major milestones. We work with corporations, pension funds, financial sponsors, and governments, and are a team of strong analytical thinkers who have a passion for producing out-of-the-box ideas. The Goldman Sachs Group, Inc. is a leading global investment banking, securities and investment management firm that provides a wide range of financial services to a substantial and diversified client base that includes corporations, financial institutions, governments, and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centers around the world.
Who We Look For
You are a proven full stack engineer. Not only strong technically, you have shown that you can work effectively with product managers, designers and other engineering teams. You have a fierce sense of ownership, caring deeply about the quality of everything that you deliver into your clients’ hands. You love the challenge of engineering and are confident in your ability to bring clarity and direction to ambiguous problem spaces. You work well in a fast-paced environment while investing deeply in long-term quality and efficiency.
Basic Qualifications
• 1 to 3 years of hands-on development experience in Core Java (Java 11-21) or Python, and proficiency in backend technologies such as databases (SQL/NoSQL), Elasticsearch, MongoDB, the Spring framework, REST, GraphQL, Hibernate, etc.
• Experience with front-end development with React, Redux, Vue, TypeScript, and/or similar frameworks.
• Demonstrated experience operating in a fast-paced Agile/Scrum setup with a global/remote team.
• Knowledge of developing and deploying applications in a public cloud (AWS, GCP or Azure) or Kubernetes.
• Experience implementing unit tests, integration tests, and Test-Driven Development.
• Strong development, analytical and problem-solving skills.
• Knowledge of prompt engineering, LLMs, AI agents, etc.
Preferred Qualifications
• Excellent communication skills and experience working directly with business stakeholders.
• Data modeling and warehousing (Snowflake, AWS Glue/EMR, Apache Spark) and a strong understanding of data engineering practices.
• Technical, team or project leadership experience.
• Some experience using Infrastructure-as-Code tools (e.g. AWS CDK, Terraform, CloudFormation).
• Experience with reactive, event-based architectures.
Goldman Sachs Engineering Culture
At Goldman Sachs, our Engineers don’t just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions.
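The qualifications above pair backend Python development with unit testing and TDD. A minimal sketch of that working style, with an invented payload format and function name, might look like:

```python
# Illustrative sketch only: a small, testable backend helper of the kind
# the posting describes. The payload schema and names are hypothetical.
import json

def parse_order(raw: str) -> dict:
    """Parse and validate a JSON order payload."""
    data = json.loads(raw)
    qty = int(data.get("qty", 0))
    if qty <= 0:
        raise ValueError("qty must be positive")
    return {"symbol": data["symbol"].upper(), "qty": qty}

# A tiny check in the spirit of test-driven development: write the
# expectation first, then make the function satisfy it.
order = parse_order('{"symbol": "gs", "qty": 3}')
print(order)  # → {'symbol': 'GS', 'qty': 3}
```

In practice such a validator would sit behind a REST endpoint (Flask, FastAPI, or Spring in the Java case) with the same assertions moved into a test suite.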
Want to push the limit of digital possibilities? Start here! © The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
Posted 4 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
As a Principal Engineer for Networking Control Services, you will spearhead the development of innovative services in the Networking Automation & Health domain. This initiative involves building a robust software stack from the ground up to monitor and manage network infrastructure, automatically resolve network issues, and ensure the high-bandwidth, low-latency connections essential for large language model (LLM) workloads. We are seeking a Principal Software Engineer with strong expertise in distributed systems, microservices, high-volume data processing and operational excellence. The ideal candidate should possess a strong sense of ownership.
Career Level: IC4
Responsibilities
Principal Member of Technical Staff - Network Control Services
The Oracle Cloud Infrastructure (OCI) team can provide you the opportunity to build and operate a suite of massive-scale, integrated cloud services in a broadly distributed, multi-tenant cloud environment. OCI is committed to providing the best in cloud products that meet the needs of our customers, who are tackling some of the world’s biggest challenges. We offer unique opportunities for smart, hands-on engineers with the expertise and passion to solve difficult problems in distributed, highly available services and virtualized infrastructure. At every level, our engineers have a significant technical and business impact, designing and building innovative new systems to power our customers’ business-critical applications.
Who are we looking for?
We are looking for engineers with distributed systems experience. You should have experience with the design of major features and launching them into production.
You’ve operated high-scale services and understand how to make them more resilient. You work on most projects and tasks independently. You have experience working with services that require data to travel long distances but must abide by compliance and regulatory requirements. The ideal candidate will own the software design and development for major components of Oracle’s Cloud Infrastructure. You should be both a rock-solid coder and a distributed systems generalist, able to dive deep into any part of the stack and low-level systems, as well as design broad distributed system interactions. You should value simplicity and scale, work comfortably in a collaborative, agile environment, and be excited to learn.
What are the biggest challenges for the team?
The team is building a brand-new service. The dynamic and fast growth of the business is driving us to build brand-new, innovative technologies. We understand that software is living and needs investment. The challenge is making the right tradeoffs, communicating those decisions effectively, and executing crisply. We need engineers who can build services that reliably protect our customers' cloud environments, who can keep our solution evolving at a fast pace to securely protect our customers, and who can build services that enable us to offer even more options to customers and contribute to the overall growth of Oracle Cloud.
Required Qualifications
• BS or MS degree in Computer Science or a relevant technical field involving coding, or equivalent practical experience
• 8+ years of total experience in software development
• Demonstrated ability to write great code using Java, GoLang, C#, or similar OO languages
• Proven ability to deliver products and experience with the full software development lifecycle
• Experience working on large-scale, highly distributed services infrastructure
• Experience working in an operational environment with mission-critical, tier-one live-site servicing
• Systematic problem-solving approach, strong communication skills, a sense of ownership, and drive
• Experience designing architectures that demonstrate deep technical depth in one area, or span many products, to enable high availability, scalability, market-leading features and flexibility to meet future business demands
Preferred Qualifications
• Hands-on experience developing and maintaining services on a public cloud platform (e.g., AWS, Azure, Oracle)
• Knowledge of Infrastructure as Code (IaC) languages, preferably Terraform
• Strong knowledge of databases (SQL and NoSQL)
• Experience with Kafka, Apache Spark and other big data technologies
About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
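The operational-excellence and resilience themes above are often demonstrated with patterns like retry-with-backoff. Here is a minimal, language-agnostic sketch in Python (the function and failure scenario are invented for illustration; a production service would add jitter, timeouts, and circuit breaking):

```python
# Sketch of a common resilience pattern for high-scale services:
# retry a transient failure with exponential backoff. Names are invented.
import time

def call_with_retry(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure
            time.sleep(base_delay * (2 ** attempt))

# Simulate a dependency that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retry(flaky))  # → ok
```

The same shape appears in most distributed-systems codebases regardless of language; what varies is the backoff policy and how failures are reported to monitoring.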
We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 4 days ago
3.0 years
0 Lacs
Kondapur, Telangana, India
On-site
What You'll Do
• Design and build backend components of our MLOps platform in Python on AWS.
• Collaborate with geographically distributed cross-functional teams.
• Participate in the on-call rotation with the rest of the team to handle production incidents.
What You Know
• At least 3 years of professional backend development experience with Python.
• Experience with web development frameworks such as Flask or FastAPI.
• Experience working with WSGI and ASGI web servers such as Gunicorn, Uvicorn, etc.
• Experience with concurrent programming designs such as AsyncIO.
• Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
• Experience with unit and functional testing frameworks.
• Experience with public cloud platforms like AWS.
• Experience with CI/CD practices, tools, and frameworks.
Nice to Have Skills
• Experience with Apache Kafka and developing Kafka client applications in Python.
• Experience with MLOps platforms such as AWS SageMaker, Kubeflow or MLflow.
• Experience with big data processing frameworks, preferably Apache Spark.
• Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
• Experience with various Python packaging options such as Wheel, PEX or Conda.
• Experience with metaprogramming techniques in Python.
Education
Bachelor’s degree in Computer Science, Information Systems, Engineering, Computer Applications, or a related field.
Benefits
In addition to competitive salaries and benefits packages, Nisum India offers its employees some unique and fun extras:
Continuous Learning - Year-round training sessions are offered as part of skill enhancement certifications sponsored by the company on an as-needed basis. We support our team to excel in their field.
Parental Medical Insurance - Nisum believes our team is the heart of our business, and we want to make sure to take care of the heart of theirs. We offer opt-in parental medical insurance in addition to our medical benefits.
Activities - From the Nisum Premier League's cricket tournaments to hosted hackathons, Nisum employees can participate in a variety of team-building activities, such as skits and dance performances, in addition to festival celebrations.
Free Meals - Free snacks and dinner are provided on a daily basis, in addition to subsidized lunch.
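The AsyncIO requirement in the posting above refers to Python's concurrency model for I/O-bound work. A minimal sketch (the task names and delays are invented):

```python
# Sketch of the AsyncIO pattern the posting mentions: several I/O-bound
# tasks awaited concurrently with asyncio.gather. Names are illustrative.
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for a network or DB call
    return f"{name}:done"

async def main() -> list:
    # All three "requests" run concurrently; total wall time is roughly
    # the longest single delay, not the sum of all three.
    return await asyncio.gather(
        fetch("a", 0.03), fetch("b", 0.01), fetch("c", 0.02)
    )

results = asyncio.run(main())
print(results)  # → ['a:done', 'b:done', 'c:done']
```

This is the same event-loop model that ASGI servers like Uvicorn and frameworks like FastAPI build on.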
Posted 4 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role and Responsibilities
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution and execution.
Your primary responsibilities include:
• Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
• Strive for continuous improvement by testing the built solution and working within an agile framework.
• Discover and implement the latest technology trends to maximize value and build creative solutions.
Preferred Education: Master's Degree
Required Technical and Professional Expertise
• Experience with Apache Spark (PySpark): in-depth knowledge of Spark’s architecture, core APIs, and PySpark for distributed data processing.
• Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
• Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
• Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
• Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy.
• SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
• Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems.
Preferred Technical and Professional Experience
• Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
• Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
• Good to have: experience with detection and prevention tools for company products, platforms, and customer-facing services.
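The ETL-pipeline concept named in the expertise list above can be sketched end to end in a few lines. This toy version uses plain Python instead of Spark or Pandas so it runs anywhere; the field names and data are invented:

```python
# Toy extract-transform-load flow, plain Python. At scale this is the
# same shape as a PySpark groupBy().avg() over files in a data lake.
import csv, io
from collections import defaultdict

raw = "city,temp\nMumbai,31\nPune,27\nMumbai,33\n"

# Extract: parse CSV records from the source
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast string fields to typed values
records = [{"city": r["city"], "temp": float(r["temp"])} for r in rows]

# Load/aggregate: average temperature per city
totals = defaultdict(list)
for r in records:
    totals[r["city"]].append(r["temp"])
avg = {c: sum(v) / len(v) for c, v in totals.items()}
print(avg)  # → {'Mumbai': 32.0, 'Pune': 27.0}
```

In a real pipeline the extract stage would read from HDFS or cloud storage, the transform would be a distributed Spark job, and the load would write to a warehouse table.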
Posted 4 days ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
This job is with Standard Chartered Bank, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.
Job Summary
The Chapter Lead, Backend Development is a hands-on developer role focusing on back-end development, accountable for the people management and capability development of their Chapter members. Responsibilities in detail are:
Responsibilities
• Oversees the execution of functional standards and best practices and provides technical assistance to the members of their Chapter.
• Responsible for the quality of the code repository where applicable.
• Maintain exemplary coding standards within the team, contributing to code base development and code repository management.
• Perform code reviews to guarantee quality and promote a culture of technical excellence in Java development.
• Function as a technical leader and active coder, setting and enforcing domain-specific best practices and technology standards.
• Allocate technical resources and personal coding time effectively, balancing leadership with hands-on development tasks.
• Maintain a dual focus on leadership and hands-on development, committing code while steering the chapter's technical direction.
• Oversee Java backend development standards within the chapter across squads, ensuring uniform excellence and adherence to best coding practices.
• Harmonize Java development methodologies across the squad, guiding the integration of innovative practices that align with the bank's engineering strategies.
• Advocate for the adoption of cutting-edge Java technologies and frameworks, driving the evolution of backend practices to meet future challenges.
Strategy
• Acts as a conduit for the wider domain strategy, for example technical standards.
• Prioritises and makes available capacity for technical debt.
• This role is about capability building; it is not to own applications or delivery.
• Actively shapes and drives the bank-wide engineering strategy and programmes to uplift standards and steer the technological direction towards excellence.
• Acts as a custodian for Java backend expertise, providing strategic leadership to enhance skill sets and ensure the delivery of high-performance banking solutions.
Business
• Experienced practitioner and hands-on contributor to the squad delivery for their craft (e.g. engineering).
• Responsible for balancing skills and capabilities across teams (squads) and hives, in partnership with the Chief Product Owner and Hive Leadership, and in alignment with the fixed capacity model.
• Responsible for evolving the craft towards improving automation, simplification and innovative use of the latest market trends.
• Collaborate with product owners and other tech leads to ensure applications meet functional requirements and strategic objectives.
Processes
• Promote a feedback-rich environment, utilizing internal and external insights to continuously improve chapter operations.
• Adopt and embed the Change Delivery Standards throughout the lifecycle of the product/service.
• Ensure roles, job descriptions and expectations are clearly set, and provide periodic feedback to the entire team.
• Follows the chapter operating model to ensure a system exists to continue to build the capability and performance of the chapter. The Chapter Lead role may vary based upon the specific chapter domain it leads.
People & Talent
• Accountable for the people management and capability development of their Chapter members.
• Reviews metrics on capabilities and performance across their area, maintains an improvement backlog for their Chapters, and drives continual improvement of their chapter.
• Focuses on the development of people and capabilities as the highest priority.
Risk Management
• Responsible for effective capacity risk management across the Chapter with regard to attrition and leave plans.
• Ensures the chapter follows the standards with respect to risk management as applicable to their chapter domain.
• Adheres to common practices to mitigate risk in their respective domain.
• Design and uphold a robust risk management plan, with contingencies for succession and role continuity, especially in critical positions.
Governance
• Ensure all artefacts and assurance deliverables meet the required standards and policies (e.g., SCB Governance Standards, ESDLC).
Regulatory & Business Conduct
• Ensure a comprehensive understanding of and adherence to local banking laws, anti-money laundering regulations, and other compliance mandates.
• Conduct business activities with a commitment to legal and regulatory compliance, fostering an environment of trust and respect.
Key Stakeholders
• Chapter Area Lead
• Sub-domain Tech Lead
• Domain Architect
• Business Leads / Product Owners
Other Responsibilities
• Champion the company's broader mission and values, integrating them into daily operations and team ethos.
• Undertake additional responsibilities as necessary, ensuring they contribute to the organisation's strategic aims and adhere to Group and other relevant policies.
Skills and Experience
• Hands-on Java development
• Leadership in system architecture
• Database proficiency
• CI/CD
• Container platforms - Kubernetes / OCP / Podman
Qualifications
• Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field, with preference given to advanced degrees.
• 10 years of professional Java development experience, including a proven record in backend system architecture and API design.
• At least 5 years in a leadership role managing diverse development teams and spearheading complex Java projects.
• Proficiency in a range of Java frameworks such as Spring, Spring Boot, and Hibernate, and an understanding of Apache Struts.
• Proficient in Java, with solid expertise in core concepts like object-oriented programming, data structures, and complex algorithms.
• Knowledgeable in web technologies; able to work with HTTP, RESTful APIs, JSON, and XML.
• Expert knowledge of relational databases such as Oracle, MySQL and PostgreSQL; experience with NoSQL databases like MongoDB and Cassandra is a plus.
• Familiarity with DevOps tools and practices, including CI/CD pipeline deployment, containerisation technologies like Docker and Kubernetes, and cloud platforms such as AWS, Azure, or GCP.
• Solid grasp of front-end technologies (HTML, CSS, JavaScript) for seamless integration with backend systems.
• Strong version control skills using tools like Git/Bitbucket, with a commitment to maintaining high standards of code quality through reviews and automated tests.
• Exceptional communication and team-building skills, with the capacity to mentor developers, facilitate technical skill growth, and align team efforts with strategic objectives.
• Strong problem-solving skills and attention to detail.
• Excellent communication and collaboration skills.
• Ability to work effectively in a fast-paced, dynamic environment.
About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents, and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours.
When you work with us, you'll see how we value difference and advocate inclusion.
Together we:
• Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
• Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
• Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term
What We Offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
• Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
• Time off including annual leave, parental/maternity leave (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to 30 days minimum.
• Flexible working options based around home and office locations, with flexible working patterns.
• Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform; development courses for resilience and other human skills; a global Employee Assistance Programme; sick leave; mental health first-aiders; and a range of self-help toolkits.
• A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
• Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential.
Posted 4 days ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About Nourma
We're building the AI-powered finance operating system that transforms how companies manage their financial operations. Our Decision Intelligence platform combines LLMs, multi-agent systems, and real-time data integration to create an intelligent finance team.
The Role
We're seeking an AI/ML Engineer with deep expertise in LangChain, LlamaIndex, PydanticAI, and modern Python frameworks to architect and build the core intelligence layer of Nourma.
Key Responsibilities
LLM Orchestration & RAG Development (LangChain/LlamaIndex/PydanticAI focus)
• Architect complex LangChain pipelines for multi-agent financial workflows
• Build production RAG systems using LlamaIndex for financial document retrieval
• Implement agents with strong type safety and structured outputs
• Design and implement: chain-of-thought reasoning for financial analysis; dynamic prompt routing based on query complexity; memory management for long-running financial conversations; tool integration for agents to access GL, bank feeds, and operational data
• Optimise token usage and response latency for real-time WhatsApp interactions
API Development & Integration (FastAPI focus)
• Build high-performance FastAPI services for: agent-to-agent communication protocols; WhatsApp webhook processing with sub-second response; real-time financial data APIs for frontend consumption
• Design GraphQL schemas for flexible financial data queries
• Implement WebSocket connections for live financial updates
• Create robust error handling and retry mechanisms for financial integrations
Vector Database & Semantic Search (Chroma focus)
• Design and optimise Chroma collections for: financial document embeddings (loan agreements, invoices); conversation history and context retrieval; business logic and rule storage
• Implement hybrid search combining vector similarity and metadata filtering
• Build embedding pipelines for various document types (PDFs, emails, chat logs)
Infrastructure & Scalability
• Deploy and manage LLM applications.
Implement Redis caching strategies for LLM responses and financial data Design microservices architecture for agent deployment Set up monitoring and observability for AI pipelines Technical Requirements Must Have - Core Technologies Expert-level proficiency in: LangChain : Custom chains, agents, tools, memory systems LlamaIndex : Document stores, indices, query engines PydanticAI : Agent frameworks, type-safe LLM interactions, structured outputs FastAPI : Async programming, dependency injection, middleware Strong experience with Python async/await patterns Production experience with Chroma or similar vector databases Proficiency with Redis for caching and session management Experience with data pipeline and storage tools (Kafka, Spark, Airflow) for building scalable systems Nice to Have Knowledge of PostgreSQL and BigQuery for analytical workloadsUnderstanding of financial data structures (journal entries, chart of accounts) Experience with financial APIs (QuickBooks, Xero, Plaid, banking APIs) Knowledge of data consistency requirements for financial systems GraphQL schema design and optimisation Experience with WhatsApp Business API Background in fintech or accounting software Tech Stack LLMs : GPT-4, Claude, open-source models ML/AI : LangChain, LlamaIndex, PydanticAI, PyTorch, Transformers Vector DB : Chroma Data : PostgreSQL, BigQuery, Apache Kafka, Spark, Airflow APIs : FastAPI, GraphQL Infrastructure : AWS/GCP, Kubernetes, Docker, Redis Monitoring : Prometheus, Grafana, OpenTelemetry What We Offer Work on cutting-edge problems combining LLMs with real-time financial data Build systems processing millions of financial transactions Direct impact on how thousands of companies manage finances Work directly with founders and shape technical direction Ideal Candidate Profile You're excited about: Building production LangChain and PydanticAI applications at scale Creating high-performance APIs that power AI agents Designing scalable architectures for financial data 
processing Working with cutting-edge LLM technologies You've probably: Built production LangChain/LlamaIndex/PydanticAI applications serving 1000+ users Created FastAPI services handling high-throughput LLM requests Worked with vector databases in production environments Designed data processing pipelines for financial or similar domains Contributed to open-source AI/ML projects Show more Show less
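The "hybrid search combining vector similarity and metadata filtering" mentioned in this posting can be illustrated with a minimal pure-Python sketch. Chroma exposes this natively (e.g. via `where` filters on queries); the toy code below only shows the filter-then-rank pattern, and the documents, embeddings, and metadata keys are all invented for illustration:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query_vec, metadata_filter, collection, top_k=3):
    """Restrict candidates by exact metadata match, then rank by similarity."""
    candidates = [
        doc for doc in collection
        if all(doc["metadata"].get(k) == v for k, v in metadata_filter.items())
    ]
    candidates.sort(key=lambda d: cosine(query_vec, d["embedding"]), reverse=True)
    return candidates[:top_k]

# Hypothetical financial documents with 2-d toy embeddings.
docs = [
    {"id": "inv-1", "embedding": [0.9, 0.1], "metadata": {"type": "invoice"}},
    {"id": "loan-1", "embedding": [0.8, 0.2], "metadata": {"type": "loan_agreement"}},
    {"id": "inv-2", "embedding": [0.2, 0.9], "metadata": {"type": "invoice"}},
]

results = hybrid_search([1.0, 0.0], {"type": "invoice"}, docs, top_k=2)
print([d["id"] for d in results])  # invoices only, most similar first
```

In a real system the metadata filter would come from structured query parsing, and the embeddings from a model, but the two-stage shape (filter, then rank) is the same.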
Posted 4 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
We are looking for a Technical Account Manager to join our growing global team at Sectigo.

As a Technical Account Manager (TAM) at Sectigo, you will play a pivotal role in providing Premier Support to high-value enterprise customers, delivering an elevated level of access to Sectigo experts and features. Acting as the single point of contact for issue management, you will ensure consistent, proactive, and personalized service while collaborating with internal teams to optimize solutions for customer success. Sectigo Premier Support is designed to provide 24/7/365 access to experts, minimizing downtime, maximizing productivity, and driving customer success. This role is critical to achieving Sectigo's mission of delivering a world-class customer experience.

This is a full-time, in-office position, working 5 days a week from our Chennai office at DLF IT Park, Manapakkam.

Here are the core functions, responsibilities, and expectations for this role:

Technical Support and Guidance
- Facilitate timely solutions for technical support problems, ensuring minimal disruption and productivity loss.
- Collaborate with internal technical teams to proactively monitor and manage customer issues.
- Troubleshoot and resolve technical problems with a strong understanding of Sectigo's PKI/digital security products, including SSL/TLS, S/MIME, and Certificate Lifecycle Management (CLM).
- Adhere to support metrics, including SLAs, response times, and resolution times, while meeting and exceeding customer expectations.

Customer Relationship Management
- Provide Premier-quality account management to assigned customers, ensuring they fully receive the benefits of the Premier Support program.
- Act as the primary point of contact, delivering advanced troubleshooting and maintaining strategic relationships.
- Build trust by providing consistency, accountability, and visibility tailored to the customer's business and product needs.
- Conduct periodic business reviews to discuss technical health, actionable insights, and personalized assessments.

Product Expertise
- Provide product training and technical advice to clients, ensuring they are empowered to use Sectigo solutions effectively.
- Maintain expertise in Sectigo's product suite and related technologies, including Microsoft, Cisco, AWS, Citrix, Linux, Apache, RedHat, and Windows operating systems.
- Demonstrate strong knowledge of networking concepts (TCP/IP, DNS, SMTP, SSH, SSL) and information security products (antivirus, spam filters, email encryption, etc.).
- Leverage deep technical skills to proactively manage key events and prevent disruptions for customers.

Account Management and Growth
- Manage customer relationships to ensure satisfaction, retention, and long-term success.
- Identify opportunities by analyzing customer needs and usage trends.
- Act as a trusted advisor by providing personalized, data-driven insights and technical health reviews to achieve customer objectives.
- Advocate for customers by providing feedback to Sectigo's engineering and product teams based on customer insights.
- Other duties as assigned and related to the nature of this role and company initiatives.

Qualifications

Education
- Bachelor's degree in business, information technology, or a related field (or equivalent experience) is strongly preferred.

Experience
- Minimum 3 years of dedicated customer support, account management, or client success experience in a technical or service-related field.
- Proven ability to work effectively in team environments and manage cross-functional communication.
- Experience in the security industry or with technical support products is a strong asset.

Ideal Candidate Profile, Talents, and Desired Qualifications

Account Management
- Proven ability to build and nurture long-term customer relationships.
- Experience in enterprise account management or a similar customer-facing role.
- Experience conducting business reviews and delivering customer-centric solutions.

Technical Expertise
- Familiarity with enterprise-grade technical environments, including Microsoft products, AWS, Cisco, and Java.
- Understanding of PKI/digital security products.
- Expertise in operating systems (Linux, Apache, RedHat, Windows) and networking concepts (TCP/IP, DNS, SMTP, etc.).
- Hands-on experience troubleshooting server-level and security product issues.

Communication and Problem-Solving
- Excellent interpersonal and organizational skills to manage multiple accounts effectively.
- Strong problem-solving skills to address technical challenges and provide timely resolutions.

Soft Skills
- Ability to work collaboratively in a team environment and adapt to flexible schedules.
- Strong relationship-building, problem-solving, and customer service skills.
- Ability to manage multiple accounts and prioritize tasks effectively.
- Analytical mindset with a proactive approach to identifying and solving issues.
- Willingness to adjust working hours based on customer needs and business demands.
- Fluency in English with excellent verbal and written communication skills. Additional language proficiency is a plus.

Additional Information
Global team. Global reach. Global impact.

At Sectigo, we believe doing good is good business. Our strength and our success come from our team of passionate, engaged individuals who make a difference, both locally and globally. Our commitment to engagement is rooted in an unconditionally inclusive workforce, embodying our unique perspectives, heritages, and backgrounds, all as diverse as the experiences of each Sectigo employee. Importantly, we strive to be recognized not only as the CLM leader but also for our intentional efforts to promote employees into the roles that most challenge and excite them, into experiences that allow them to grow their interests as we grow the business. We are committed to bringing a little bit of fun and a whole lot of happiness into everything we do so that our work – and our team members – reflect the positive outcomes we deliver to our customers every day.
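Certificate Lifecycle Management work of the kind described above often starts with expiry monitoring. As an illustrative sketch only (not Sectigo tooling), the code below computes days until a certificate expires from a `notAfter` string in the fixed format that Python's `ssl.SSLSocket.getpeercert()` returns; the dates are made up:

```python
from datetime import datetime, timezone

# ssl.getpeercert() reports expiry in this fixed format, e.g. "Jun 15 12:00:00 2026 GMT".
CERT_DATE_FMT = "%b %d %H:%M:%S %Y %Z"

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Parse a certificate's notAfter field and return whole days remaining."""
    expires = datetime.strptime(not_after, CERT_DATE_FMT).replace(tzinfo=timezone.utc)
    return (expires - now).days

# Hypothetical reference time and certificate dates for illustration.
now = datetime(2025, 6, 15, tzinfo=timezone.utc)
print(days_until_expiry("Jun 15 12:00:00 2026 GMT", now))
print(days_until_expiry("Jun 20 00:00:00 2025 GMT", now))
```

A monitoring job would fetch the live certificate with `ssl.create_default_context()` and alert when the remaining days drop below a renewal threshold.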
Posted 4 days ago
7.0 years
0 Lacs
India
On-site
Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale, across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in.

Job Description

REQUIREMENTS:
- Total experience of 7+ years.
- Extensive experience in back-end development using Java 8 or higher, Spring Framework (Core/Boot/MVC), Hibernate/JPA, and microservices architecture.
- Strong working experience in front-end applications using technologies such as TypeScript, JavaScript, and Angular 10.
- Hands-on experience with REST APIs, caching systems (e.g., Redis), and messaging systems like Kafka.
- Proficiency in Service-Oriented Architecture (SOA) and web services (Apache CXF, JAX-WS, JAX-RS, SOAP, REST).
- Hands-on experience with multithreading and cloud development.
- Strong working experience in data structures and algorithms, unit testing, and object-oriented programming (OOP) principles.
- Hands-on experience with relational databases such as SQL Server, Oracle, MySQL, and PostgreSQL.
- Experience with DevOps tools and technologies such as Ansible, Docker, Kubernetes, Puppet, Jenkins, and Chef.
- Proficiency in build automation tools like Maven, Ant, and Gradle.
- Hands-on experience with cloud technologies such as AWS/Azure.
- Strong understanding of UML and design patterns.
- Ability to simplify solutions, optimize processes, and efficiently resolve escalated issues.
- Strong problem-solving skills and a passion for continuous improvement.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
- Enthusiasm for learning new technologies and staying updated on industry trends.

RESPONSIBILITIES:
- Writing and reviewing great-quality code.
- Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project.
- Envisioning the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to realize it.
- Determining and implementing design methodologies and toolsets.
- Enabling application development by coordinating requirements, schedules, and activities.
- Leading/supporting UAT and production rollouts.
- Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it.
- Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement.
- Giving constructive feedback to team members and setting clear expectations.
- Helping the team troubleshoot and resolve complex bugs.
- Coming up with solutions to any issue raised during code/design review, and being able to justify the decision taken.
- Carrying out POCs to make sure that suggested designs/technologies meet the requirements.

Qualifications
Bachelor's or master's degree in computer science, information technology, or a related field.
Posted 4 days ago
8.0 years
6 - 9 Lacs
Hyderābād
On-site
About the Role:
Grade Level (for internal use): 11

S&P Global Market Intelligence

The Role: Lead Software Engineer

The Team: The Market Intelligence Industry Data Solutions business line provides data technology and services supporting acquisition, ingestion, content management, mastering, and distribution to power our Financial Institution Group business and customer needs. We focus on platform scale to support the business by following a common data lifecycle that accelerates business value. We provide essential intelligence for the Financial Services, Real Estate, and Insurance industries.

The Impact: The FIG Data Engineering team will be responsible for implementing and maintaining services and/or tools to support existing feed systems, which allow users to consume FIG datasets, and for making FIG data available to a data fabric for wider consumption and processing within the company.

What's in it for you: The ability to work with global stakeholders on the latest tools and technologies.

Responsibilities:
- Build new data acquisition and transformation pipelines using big data and cloud technologies.
- Work with the broader technology team, including the information architecture and data fabric teams, to align pipelines with the lodestone initiative.

What We're Looking For:
- Bachelor's in computer science or equivalent, with at least 8 years of professional software work experience.
- Experience with big data platforms such as Apache Spark, Apache Airflow, Google Cloud Platform, and Apache Hadoop.
- Deep understanding of REST, good API design, and OOP principles.
- Experience with object-oriented/object-function scripting languages: Python, C#, Scala, etc.
- Good working knowledge of relational SQL and NoSQL databases.
- Experience maintaining and developing software in production using cloud-based tooling (AWS, Docker & Kubernetes, Okta).
- Strong collaboration and teamwork skills, with excellent written and verbal communication skills.
- Self-starter, motivated, with the ability to work in a fast-paced software development environment.
- Agile experience is highly desirable.
- Experience with Snowflake and Databricks will be a big plus.

Return to Work
Have you taken time out for caring responsibilities and are now looking to return to work? As part of our Return-to-Work initiative, we are encouraging enthusiastic and talented returners to apply and will actively support your return to the workplace.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective.
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. 
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 316183 Posted On: 2025-06-15 Location: Hyderabad, Telangana, India
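The "data acquisition and transformation pipelines" responsibility in the posting above can be sketched in miniature. The pure-Python transform below illustrates only the ETL pattern; in the stack this role names, the same step would typically run as a Spark task scheduled by Airflow, and every field name and value here is invented:

```python
# Toy raw feed rows standing in for ingested source data.
raw_rows = [
    {"entity": " acme corp ", "assets": "1200.50", "period": "2024Q4"},
    {"entity": "Beta Bank", "assets": "n/a", "period": "2024Q4"},
]

def transform(row):
    """Normalize one raw row; return None for rows that fail validation."""
    try:
        assets = float(row["assets"])
    except ValueError:
        # A real pipeline would route bad rows to a quarantine table or DLQ.
        return None
    return {
        "entity": row["entity"].strip().title(),
        "assets": assets,
        "period": row["period"],
    }

clean = [r for r in map(transform, raw_rows) if r is not None]
print(clean)
```

The separation of parse, validate, and normalize steps is what makes such a transform easy to port to Spark's `DataFrame` API or to unit-test in isolation.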
Posted 4 days ago
10.0 years
0 Lacs
Bengaluru
On-site
Work Schedule: Standard (Mon-Fri)
Environmental Conditions: Office

Job Description

About the Team: Digital Engineering is Thermo Fisher's "Software Engineering" center of excellence. We build the cloud computing services, products, and platforms that the scientific community demands, using emerging technologies with the potential to significantly change lab workflows and facilitate access to powerful data analysis techniques. Our division serves as an extension of Thermo Fisher's software R&D teams while enhancing our responsiveness and attention in enabling our customers to make the world healthier, cleaner, and safer. Thermo Fisher was built to serve society, and that sense of purpose will continue to enhance innovation and scientific advancement. We are committed to diversity in our workforce and are proud to be an equal opportunity employer. We apply industry-standard methodologies to the design, development, and deployment of world-class software products built to demonstrate the power and scalability of the cloud.

Roles & Responsibilities:

Purpose: The Software Staff Engineer provides software and systems architecture and design leadership to multiple software development Scrum teams delivering a Gen AI solution. As a lead developer, this individual will actively contribute to the architecture, design, and development of new features, integrating generative AI systems to enhance functionality. The role involves working with AI frameworks and models, ensuring seamless integration with existing product offerings. In addition, the role demands leading, empowering, and mentoring other team members and providing ample guidance on technical challenges.

Responsibilities:
- Provide software and systems architecture and design leadership to a team of engineers.
- Design and implement cloud service and software architecture for new products and extensions to existing products.
- Be the team's "go-to" person for architecture, design, and implementation questions, and provide guidance.
- Contribute actively to solution analysis of requirements; understand, articulate, and challenge the requirements.
- Provide guidance on design activity to other programmers on technical aspects of the project.
- Proactively look for ways and patterns to continuously automate feature testing, with measurable and tangible goals.
- Be authentic and transparent, lead by example, and hold self and others accountable; inspire, motivate, and collaborate with others.
- Anticipate needs and problems while creating solutions; be willing to ask difficult questions and do things differently, greeting challenge and change as opportunity.
- Be a highly motivated fast learner who can self-start and "determine what needs figuring out".
- Actively participate in development communities of practice, sharing and learning standard processes, leading initiatives within the community, and getting involved in other organization initiatives.
- Demonstrate excellent verbal and written communication skills; effectively document artifacts and processes, then explain them to others.

Candidate Requirements:

Education: Bachelor's in engineering or Master's in computer science with 10+ years of extensive experience.

Mandatory skills, knowledge, and experience:
- Python development: Minimum 6 years of proven experience in Python development, with a strong emphasis on backend development, including creating RESTful APIs and working with libraries like FastAPI for high-performance web services.
- Generative AI (Gen AI) & OpenAI integration: Hands-on experience with generative AI frameworks and APIs, including OpenAI models, for generating human-like responses, completing tasks, and automating processes; knowledge of how to effectively integrate these models into applications.
- API development & integration: Extensive experience building and maintaining REST APIs using FastAPI, ensuring efficient communication between services and applications; familiarity with authentication, authorization, and API rate limiting.
- Data engineering & processing: Strong data engineering skills, including data extraction, transformation, and loading (ETL) processes; expertise in Pandas for data manipulation, analysis, and handling large datasets.
- LLM (large language model) prompt engineering: Experience in prompt engineering for LLMs; ability to design and optimize prompts for specific use cases to extract relevant, high-quality outputs.
- Python data science libraries: Strong proficiency in Pandas, NumPy, and other data analysis libraries to process and manipulate large volumes of data; experience generating data insights and performing statistical analyses.
- Version control & CI/CD: Proficient with Git for version control and familiar with CI/CD pipelines for automated testing and deployment.
- Scrum and Agile methodologies: 3+ years of experience with Scrum or Agile-based software development methodologies, with a focus on iterative development and collaboration.
- Testing and automation: Experience with unit testing, integration testing, and automated testing using frameworks like pytest and unittest to ensure code quality and reliability.
- Communication & documentation: Excellent verbal and written communication skills, capable of documenting code and technical processes and explaining them to both technical and non-technical collaborators.
- Non-functional requirements (NFRs): Experience defining and implementing non-functional requirements such as performance optimization, scalability, and security in data-driven applications.

Nice-to-have skills, knowledge, and experience:
- Cloud services & deployment: Experience with cloud platforms like AWS or GCP, specifically data storage, serverless computing, and scalable APIs.
- Data pipeline tools: Familiarity with tools like Apache Airflow, Apache Kafka, or similar platforms for managing and orchestrating data workflows.
- Machine learning & AI frameworks: Experience with machine learning libraries such as scikit-learn, TensorFlow, PyTorch, or similar, particularly in building and training models for data-driven applications.
- Code quality & analysis tools: Experience with SonarQube, ESLint, or similar code quality analysis tools, ensuring maintainability and scalability of the codebase.
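The "prompt engineering" and "structured outputs" requirements above can be illustrated with a small, model-free sketch: build a prompt from a template, then validate the model's JSON reply before any downstream use. The response schema (`summary`, `confidence`) is invented for illustration, and no actual LLM is called:

```python
import json

PROMPT_TEMPLATE = (
    "You are a finance assistant. Summarize the document below.\n"
    "Respond ONLY with JSON: {{\"summary\": str, \"confidence\": float}}\n\n"
    "Document:\n{document}"
)

def build_prompt(document: str) -> str:
    # Doubled braces in the template survive .format() as literal braces.
    return PROMPT_TEMPLATE.format(document=document)

def parse_response(raw: str) -> dict:
    """Validate the model's structured output before trusting it downstream."""
    data = json.loads(raw)
    if not isinstance(data.get("summary"), str):
        raise ValueError("missing or non-string 'summary'")
    if not 0.0 <= float(data.get("confidence", -1)) <= 1.0:
        raise ValueError("'confidence' must be in [0, 1]")
    return data

prompt = build_prompt("Q3 revenue rose 12% year over year.")
parsed = parse_response('{"summary": "Revenue up 12% YoY in Q3.", "confidence": 0.9}')
print(parsed["summary"])
```

In production this validation step is what libraries like Pydantic formalize; rejecting malformed outputs early keeps bad generations from propagating.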
Posted 4 days ago
8.0 years
0 Lacs
Bengaluru
On-site
Overview:

Working at Atlassian
Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, part of being a distributed-first company.

Responsibilities:

Team: Core Engineering Reliability Team
- Collaborate with engineering and TPM leaders, developers, and process engineers to create data solutions that extract actionable insights from incident and post-incident management data, supporting the objectives of incident prevention and reducing detection, mitigation, and communication times.
- Work with diverse stakeholders to understand their needs and design data models, acquisition processes, and applications that meet those requirements.
- Add new sources, implement business rules, and generate metrics to empower product analysts and data scientists.
- Serve as the data domain expert, mastering the details of our incident management infrastructure.
- Take full ownership of problems, from ambiguous requirements through rapid iterations.
- Enhance data quality by leveraging and refining internal tools and frameworks to automatically detect issues.
- Cultivate strong relationships between teams that produce data and those that build insights.

Qualifications:

Minimum qualifications / your background:
- BS in Computer Science or equivalent experience, with 8+ years as a Senior Data Engineer or in a similar role.
- 10+ years of progressive experience building scalable datasets and reliable data engineering practices.
- Proficiency in Python, SQL, and data platforms like Databricks.
- Proficiency in relational databases and query authoring (SQL).
- Demonstrable expertise designing data models for optimal storage and retrieval to meet product and business requirements.
- Experience building and scaling experimentation practices, statistical methods, and tools in a large-scale organization.
- Excellence in building scalable data pipelines using Spark (SparkSQL) with the Airflow scheduler/executor framework or similar scheduling tools.
- Expert experience working with AWS data services or similar Apache projects (Spark, Flink, Hive, and Kafka).
- Understanding of data engineering tools, frameworks, and standards that improve the productivity and quality of output for data engineers across the team.
- Well versed in modern software development practices (Agile, TDD, CI/CD).

Desirable qualifications:
- Demonstrated ability to design and operate data infrastructure that delivers high reliability for our customers.
- Familiarity working with datasets such as monitoring, observability, and performance data.

Benefits & Perks
Atlassian offers a wide range of perks and benefits designed to support you and your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and much more. To learn more, visit go.atlassian.com/perksandbenefits.

About Atlassian
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet, and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support you with accommodations or adjustments at any stage of the recruitment process.
Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh .
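The "automatically detect issues" data-quality responsibility in the posting above can be sketched without any internal tooling: a batch of records is checked against simple null-rate rules. The incident fields and thresholds below are invented for illustration:

```python
def null_rate(records, field):
    # Fraction of records where the field is missing or empty.
    missing = sum(1 for r in records if r.get(field) in (None, ""))
    return missing / len(records)

def check_quality(records, rules):
    """Return human-readable violations; an empty list means the batch passes."""
    violations = []
    for field, max_rate in rules.items():
        rate = null_rate(records, field)
        if rate > max_rate:
            violations.append(f"{field}: null rate {rate:.0%} exceeds {max_rate:.0%}")
    return violations

# Toy incident records standing in for ingested incident-management data.
incidents = [
    {"id": 1, "detected_at": "2025-06-01T10:00Z", "severity": "SEV2"},
    {"id": 2, "detected_at": None, "severity": "SEV1"},
    {"id": 3, "detected_at": "2025-06-03T09:30Z", "severity": ""},
]

print(check_quality(incidents, {"detected_at": 0.25, "severity": 0.5}))
```

Real frameworks add many more rule types (ranges, referential integrity, freshness), but the shape is the same: declarative rules evaluated per batch, with violations surfaced to the pipeline's alerting.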
Posted 4 days ago
0 years
4 - 8 Lacs
Bengaluru
On-site
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description: SDET – Rest Assured
- Experience with Selenium for automating web applications using Java.
- Experience testing APIs using API testing tools such as Postman.
- Experience with Rest Assured and automating REST web services.
- Experience with the Cucumber BDD framework for creating test cases.
- Experience with GitHub, Maven, TestNG, and CI/CD integration tools like Bamboo.
- Experience writing SQL queries and knowledge of accessing databases like Snowflake and MongoDB.
- Programming skills in Java.
- Experience with the Agile Scrum methodology.
- Experience with bug and backlog management.
- Experience with test case management systems.
- Experience surfacing and executing on continuous-improvement initiatives within quality teams.
- Experience with ExtentReports, Apache POI, OpenCSV, MSSQL-JDBC, Bitbucket, JavaMail, and/or Eclipse/IntelliJ is a bonus.
- Experience with Jira is a bonus.
- Experience with pricing and/or the wholesale food distribution industry is a bonus.
- Experience with quality-focused metrics and auditing systems of test is a bonus.

Performance parameters and measures:
1. Understanding the test requirements and test case design of the product: ensure error-free testing solutions, minimum process exceptions, 100% SLA compliance, number of automations done using VB/macros.
2. Execute test cases and reporting: testing efficiency and quality, on-time delivery, troubleshooting queries within TAT, CSAT score.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
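Rest Assured expresses API tests as given/when/then chains in Java. As a language-neutral sketch of the same response-validation pattern, the Python below exercises a stubbed response rather than a real HTTP call; the endpoint, payload, and values are all invented:

```python
# Stubbed service call standing in for an HTTP request to the system under test.
def get_price(item_id):
    return {"status": 200, "body": {"item_id": item_id, "price": 12.99, "currency": "USD"}}

# given: a known item id / when: the pricing endpoint is called
resp = get_price("SKU-42")

# then: status code and payload contract are validated, Rest Assured-style
assert resp["status"] == 200
assert resp["body"]["item_id"] == "SKU-42"
assert isinstance(resp["body"]["price"], float) and resp["body"]["price"] > 0
print("pricing contract checks passed")
```

In an actual Rest Assured suite the same checks would read as `given().when().get("/price/SKU-42").then().statusCode(200).body("price", greaterThan(0f))`, typically wrapped in Cucumber step definitions.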
Posted 4 days ago
5.0 years
10 - 12 Lacs
Bhopal
On-site
About the Role: We are seeking a highly skilled Senior AI/ML Engineer to join our dynamic team. The ideal candidate will have extensive experience in designing, building, and deploying machine learning models and AI solutions to solve real-world business challenges. You will collaborate with cross-functional teams to create and integrate AI/ML models into end-to-end applications, ensuring models are accessible through APIs or product interfaces for real-time usage.

Responsibilities:
- Lead the design, development, and deployment of machine learning models for various use cases such as recommendation systems, computer vision, natural language processing (NLP), and predictive analytics.
- Work with large datasets to build, train, and optimize models using techniques such as classification, regression, clustering, and neural networks.
- Fine-tune pre-trained models and develop custom models based on specific business needs.
- Collaborate with data engineers to build scalable data pipelines and ensure the smooth integration of models into production.
- Collaborate with frontend/backend engineers to build AI-driven features into products or platforms.
- Build proof-of-concept and production-grade AI applications and tools with intuitive UIs or workflows.
- Ensure scalability and performance of deployed AI solutions within the full application stack.
- Implement model monitoring and maintenance strategies to ensure performance, accuracy, and continuous improvement of deployed models.
- Design and implement APIs or services that expose machine learning models to frontend or other systems.
- Utilize cloud platforms (AWS, GCP, Azure) to deploy, manage, and scale AI/ML solutions.
- Stay up to date with the latest advancements in AI/ML research, and apply innovative techniques to improve existing systems.
- Communicate effectively with stakeholders to understand business requirements and translate them into AI/ML-driven solutions.
- Document processes, methodologies, and results for future reference and reproducibility.

Required Skills & Qualifications:
- Experience: 5+ years of experience in AI/ML engineering roles, with a proven track record of successfully delivering machine learning projects.
- AI/ML Expertise: Strong knowledge of machine learning algorithms (supervised, unsupervised, reinforcement learning) and AI techniques, including NLP, computer vision, and recommendation systems.
- Programming Languages: Proficient in Python and relevant ML libraries such as TensorFlow, PyTorch, Scikit-learn, and Keras.
- Data Manipulation: Experience with data manipulation libraries such as Pandas, NumPy, and SQL for managing and processing large datasets.
- Model Development: Expertise in building, training, deploying, and fine-tuning machine learning models in production environments.
- Cloud Platforms: Experience with cloud platforms such as AWS, GCP, or Azure for the deployment and scaling of AI/ML models.
- MLOps: Knowledge of MLOps practices for model versioning, automation, and monitoring.
- Data Preprocessing: Proficient in data cleaning, feature engineering, and preparing datasets for model training.
- Strong experience building and deploying end-to-end AI-powered applications: not just models, but full system integration.
- Hands-on experience with Flask, FastAPI, Django, or similar frameworks for building REST APIs for model serving.
- Understanding of system design and software architecture for integrating AI into production environments.
- Experience with frontend/backend integration (basic React/Next.js knowledge is a plus).
- Demonstrated projects where AI models were part of deployed user-facing applications.
- NLP & Computer Vision: Hands-on experience with natural language processing or computer vision projects.
- Big Data: Familiarity with big data tools and frameworks (e.g., Apache Spark, Hadoop) is an advantage.
- Problem-Solving Skills: Strong analytical and problem-solving abilities, with a focus on delivering practical AI/ML solutions.

Nice to Have:
- Experience with deep learning architectures (CNNs, RNNs, GANs, etc.) and techniques.
- Knowledge of deployment strategies for AI models using APIs, Docker, or Kubernetes.
- Experience building full-stack applications powered by AI (e.g., chatbots, recommendation dashboards, AI assistants).
- Experience deploying AI/ML models in real-time environments using API gateways, microservices, or orchestration tools like Docker and Kubernetes.
- Solid understanding of statistics and probability.
- Experience working in Agile development environments.

What You'll Gain:
- Be part of a forward-thinking team working on cutting-edge AI/ML technologies.
- Collaborate with a diverse, highly skilled team in a fast-paced environment.
- Opportunity to work on impactful projects with real-world applications.
- Competitive salary and career growth opportunities.

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,200,000.00 per year
Schedule: Day shift, Fixed shift
Work Location: In person
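The "Data Preprocessing" skill above (cleaning and feature engineering before model training) can be illustrated with a single step: standardizing a numeric feature to z-scores. This is a pure-Python sketch for clarity; in practice the posting's named libraries (Scikit-learn's StandardScaler, Pandas) would do this work.

```python
# Minimal feature standardization (zero mean, unit variance), illustrative only.
from math import sqrt

def standardize(column):
    """Scale a numeric feature to z-scores: (x - mean) / std."""
    n = len(column)
    mean = sum(column) / n
    var = sum((x - mean) ** 2 for x in column) / n
    std = sqrt(var) or 1.0  # guard against a constant column (std == 0)
    return [(x - mean) / std for x in column]

print(standardize([2.0, 4.0, 6.0]))
```

Fitting the mean and std on training data only, then reusing them on test data, is the usual guard against leakage.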
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

About CloudxLab: CloudxLab is a team of developers, engineers, and educators passionate about building innovative products to make learning fun, engaging, and for life. We are a highly motivated team who build fresh and lasting learning experiences for our users. Powered by our innovation processes, we provide a gamified environment where learning is fun and constructive. From creative design to intuitive apps, we create a seamless learning experience for our users. We upskill engineers in deep tech to make them employable and future-ready.

CloudxLab is looking for Machine Learning Engineers who have a good understanding of Machine Learning using Python. The primary responsibilities of a Machine Learning Engineer at CloudxLab are:
- Review the machine learning and big data projects submitted by learners.
- Build test case driven assessments for machine learning, deep learning, Spark, and data analytics.
- Answer learners' queries.
- Contribute to new machine learning and big data projects as CloudxLab launches them.

The candidate must be hands-on with the following:
- Linux
- SQL
- Data analysis using NumPy, Pandas, and Matplotlib
- Machine learning with Scikit-learn
- Deep learning with TensorFlow (version 1 or 2)
- Apache Spark

As part of the job application you will have to complete an online assessment test. The assessment test checks for skills needed for the job and also requires you to submit a blog you have written and published online, preferably on LinkedIn. The link to the form is below: https://forms.gle/55LbLnufpeK6kqgE8
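The "test case driven assessments" responsibility above amounts to running a learner's submission against known input/output pairs and scoring the result. A hedged sketch of that idea, with a made-up grading function and example cases (not CloudxLab's actual grader):

```python
# Illustrative auto-grader: score a submitted function against expected cases.
def grade_submission(func, cases):
    """Run func against (args, expected) pairs; return a score out of 100."""
    passed = 0
    for args, expected in cases:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing submission simply fails that case
    return round(100 * passed / len(cases))

# Example: a learner's slightly buggy answer to "sum of squares 1..n"
def learner_answer(n):
    return sum(i * i for i in range(n))  # bug: should be range(1, n + 1)

cases = [((3,), 14), ((1,), 1), ((0,), 0)]
print(grade_submission(learner_answer, cases))  # 33
```

Wrapping each case in try/except keeps one crashing test from taking down the whole assessment run.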
Posted 4 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Summary: You are a passionate, technical, client-obsessed Solution Specialist who understands emerging energy industry trends and their implications for utilities and their clients. You are an integral part of the team, leveraging technical expertise in the planning, requirements management, design, build (including custom software development), and implementation of GE Vernova GridOS DERMS solutions to meet client needs.

Job Description

What will you as our new DevOps Security Engineer work on?
- Engage with our utility clients to understand the impacts of distributed energy resources (DERs) on the distribution system by providing our utility clients with solutions to manage and plan for the emergence of more DERs.
- Work collaboratively with a strong team of Technical Leads and Solution Specialists to provide input on solution architecture based on client needs and solution capabilities.
- Help define success criteria and contribute to solution diagrams for the project.
- Engage with our team to deploy GE GridOS DERMS solutions to client environments to support project use cases and DER planning scenarios.
- Document and design client-specific solution deployment requirements, capturing acceptance criteria and necessary features to meet client business needs.
- Collaborate with other members of the Solutions team to expand our solution consulting and delivery practice, build standards of excellence, and continuously deliver innovative solution offerings for clients.
- Collaborate and contribute to identifying project delivery risks and recommending potential mitigation strategies, in collaboration with the Project Manager.

Required Skills:
- Education: Bachelor's degree in computer science or engineering; master's degree preferred.
- Experience: 2-5 years of enterprise application development, deployment, and integration experience. Java, Microsoft .NET, or other enterprise software development experience an asset.
Strong hands-on experience with:
- AWS, Azure, and/or GCP cloud platforms and technologies
- RHEL 8+ Linux OS and shell scripting
- Puppet, Ansible, Terraform, ArgoCD
- Docker, Kernel-based VM (KVM)
- PostgreSQL, MariaDB, MongoDB
- Redis, Memcached
- Nginx, HAProxy, Apache
- Python, JSON, YAML, Bash, Git
- Infrastructure as Code scripting with Terraform, Ansible, or cloud-native tooling (ARM, CloudFormation)
- Monitoring tools for statistical analysis and proactive system tuning, such as DataDog
- GitHub, CI/CD pipeline design and scheduling with code quality stages spanning the lifecycle of continuous build and quality, continuous deployment, and continuous testing
- Cyber security best practices, including authentication protocols, ACL rules, and identity management mechanisms
- DevSecOps (static, dynamic, artifactory, code scanning)
- Building and deploying Docker containers using Helm charts and YAML configurations
- Deployment and management of securely hardened technology frameworks, including Kafka and Kubernetes
- Pipeline integration with DevOps tools such as Git, Docker, K8s containers, SonarQube, Nexus, Vault, test automation, Fortify, Twistlock, Gitea, and Jenkins
- Solution deployment with PDI and ArgoCD technologies
- Agile/Lean principles such as Scrum, Kanban, MVP
- Windows and Linux Server, SQL Server, Active Directory, Azure Active Directory, and infrastructure automation tools

Self-starter with the ability to work in a cross-functional, team-based environment. Excellent analytical skills; must be able to look at situations from different vantage points to make data-driven decisions and solve problems.
- Knowledge of Node.js and microservices.
- Knowledge of relational and non-relational databases.
- Ability to triage and debug issues that arise during testing and production.
- Ability to automate testing across CI/CD and cloud infrastructure management.
- Knowledge of secure code development.
- Strong understanding of the utilities industry vertical and what Distributed Energy Resource Management Systems (DERMS) are all about.
- Hands-on Python enterprise application development.

Nice to Have:
- Knowledge: You are highly familiar with emerging energy industry trends and their implications for utility clients in DER management, coupled with how to manage requirements effectively, how to message change effectively, and, overall, how to manage client expectations.
- Excellence: You get things done within project deadlines, with a strong focus on quality. Positive attitude and a strong commitment to delivering quality work.
- Teamwork: You are a natural collaborator and demonstrate a “we before me” attitude. Self-starter with the ability to work in a cross-functional, team-based environment.
- Problem Solving: You can quickly understand and analyze various approaches and processes, and are able to configure solutions to client needs given existing product functionality. You can drill down to the details, obtaining the right level of specificity for your team. You can creatively solve complex problems. You understand how to triage and debug issues that arise during testing and production, and possess excellent analytical and problem resolution skills.
- Communication: Strong written and verbal communication style. Can effectively share complex technical topics with various levels of audience.
- Growth Mindset: You are deeply curious and love to ask questions. You’re a lifelong learner, with the ability and desire to learn different skills outside of your domain of expertise.
- Client Focus: You enjoy being in front of clients, listening to their needs. You are deeply focused on ensuring their success. You can create powerful user stories detailing the needs of your clients.
- Innovation: A genuine interest in new tools and technology. You learn new software quickly without extensive documentation or hand-holding. High enthusiasm with a sense of urgency to get things done.

Additional Information: Relocation Assistance Provided: Yes
Posted 4 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the role: Want to be on a team full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Data Platform team does just that. SailPoint is seeking a Senior Staff Data/Software Engineer to help build the robust data ingestion and processing systems that power our data platform. This role is a critical bridge between teams. It requires excellent organization and communication as the coordinator of work across multiple engineers and projects. We are looking for well-rounded engineers who are passionate about building and delivering reliable, scalable data pipelines. This is a unique opportunity to build something from scratch while having the backing of an organization that has the muscle to take it to market quickly, with a very satisfied customer base.

Responsibilities:
- Spearhead the design and implementation of ELT processes, especially focused on extracting data from, and loading data into, various endpoints, including RDBMS, NoSQL databases, and data warehouses.
- Develop and maintain scalable data pipelines for both stream and batch processing, leveraging JVM-based languages and frameworks.
- Collaborate with cross-functional teams to understand diverse data sources and environment contexts, ensuring seamless integration into our data ecosystem.
- Utilize the AWS service stack wherever possible to implement lean design solutions for data storage, data integration, and data streaming problems.
- Develop and maintain workflow orchestration using tools like Apache Airflow.
- Stay abreast of emerging technologies in the data engineering space, proactively incorporating them into our ETL processes.
- Organize work from multiple Data Platform teams and customers with other Data Engineers.
- Communicate status, progress, and blockers of active projects to Data Platform leaders.
- Thrive in an environment with ambiguity, demonstrating adaptability and problem-solving skills.

Qualifications:
- BS in computer science or a related field.
- 10+ years of experience in data engineering or a related field.
- Demonstrated system-design experience orchestrating ELT processes targeting data.
- Excellent communication skills.
- Demonstrated ability to internalize business needs and drive execution from a small team.
- Excellent organization of work tasks and status of new and in-flight tasks, including impact analysis of new work.
- Strong understanding of Python.
- Good understanding of Java.
- Strong understanding of SQL and data modeling.
- Familiarity with Airflow.
- Hands-on experience with at least one streaming or batch processing framework, such as Flink or Spark.
- Hands-on experience with containerization platforms such as Docker and container orchestration tools like Kubernetes.
- Proficiency in the AWS service stack.
- Experience with DBT, Kafka, Jenkins, and Snowflake.
- Experience leveraging tools such as Kustomize, Helm, and Terraform for implementing infrastructure as code.
- Strong interest in staying ahead of new technologies in the data engineering space.
- Comfortable working in ambiguous team situations, showcasing adaptability and drive in solving novel problems in the data engineering space.

Preferred:
- Experience with AWS
- Experience with Continuous Delivery
- Experience instrumenting code for gathering production performance metrics
- Experience working with a data catalog tool (e.g., Atlan)

What success looks like in the role:

Within the first 30 days you will:
- Onboard into your new role, get familiar with our product offering and technology, proactively meet peers and stakeholders, and set up your test and development environment.
- Seek to deeply understand business problems or common engineering challenges.
- Learn the skills and abilities of your teammates and align expertise with available work.

By 90 days:
- Proactively collaborate on, discuss, debate, and refine ideas, problem statements, and software designs with different (sometimes many) stakeholders, architects, and members of your team.
- Increase team velocity and show contribution to improving the maturation and delivery of the Data Platform vision.

By 6 months:
- Collaborate with Product Management and the Engineering Lead to estimate and deliver small to medium complexity features more independently.
- Occasionally serve as a debugging and implementation expert during escalations of systems issues that have evaded the ability of less experienced engineers to solve in a timely manner.
- Share support of critical team systems by participating in calls with customers, learning the characteristics of currently running systems, and participating in improvements.
- Engage with team members, providing them with challenging work and building cross-skill expertise.
- Plan project support and execution with peers and Data Platform leaders.

SailPoint is an equal opportunity employer and we welcome all qualified candidates to apply to join our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other category protected by applicable law. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact hr@sailpoint.com or mail to 11120 Four Points Dr, Suite 100, Austin, TX 78726, to discuss reasonable accommodations.
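The ELT emphasis in this posting (load raw data into an endpoint first, then transform inside the store) can be sketched end to end in a few lines. This toy uses stdlib sqlite3 purely for illustration; the role itself targets RDBMS/NoSQL/warehouse endpoints with JVM frameworks and Airflow orchestration, and the table names here are invented.

```python
# Toy ELT step: Extract rows, Load them raw, Transform with SQL in the store.
import sqlite3

def run_elt(source_rows):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")
    # Load first (the "EL" of ELT) ...
    db.executemany("INSERT INTO raw_events VALUES (?, ?)", source_rows)
    # ... then transform inside the database (the "T")
    db.execute("""CREATE TABLE user_totals AS
                  SELECT user_id, SUM(amount) AS total
                  FROM raw_events GROUP BY user_id""")
    return db.execute(
        "SELECT user_id, total FROM user_totals ORDER BY user_id").fetchall()

print(run_elt([(1, 10.0), (2, 5.0), (1, 2.5)]))  # [(1, 12.5), (2, 5.0)]
```

Deferring the transform to the target store is what distinguishes ELT from classic ETL, and is why warehouse SQL skills sit alongside pipeline skills in the qualifications above.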
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business.

GCP Data Engineer
- Design and develop data-ingestion frameworks, real-time processing solutions, and data processing and transformation frameworks leveraging open source tools and data processing frameworks.
- Hands-on experience with technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, the Hadoop platform, Hive, Presto, Druid, and Airflow.
- Deep understanding of BigQuery architecture, best practices, and performance optimization.
- Proficiency in LookML for building data models and metrics.
- Experience with Dataproc for running Hadoop/Spark jobs on GCP.
- Knowledge of configuring and optimizing Dataproc clusters.
- Offer system support as part of a support rotation with other team members.
- Operationalize open source data-analytic tools for enterprise use.
- Ensure data governance policies are followed by implementing or validating data lineage, quality checks, and data classification.
- Understand and follow the company development lifecycle to develop, deploy, and deliver the solutions.

Minimum Qualifications:
- Bachelor's degree in Computer Science, CIS, or a related field
- Experience on projects involving the implementation of software development life cycles (SDLC)

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page or create an account to set up email alerts as new job postings become available that meet your interest!
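The "quality checks" item under data governance above usually means automated rules over incoming rows: null keys, duplicates, out-of-range values. A plain-Python sketch with hypothetical column names; on GCP these checks would typically run as SQL against BigQuery tables or inside the pipeline itself.

```python
# Illustrative data-quality rules: null key, duplicate key, range violation.
def quality_report(rows, key="id"):
    """Return (row_index, issue) pairs for rows failing basic checks."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        if row.get(key) is None:
            issues.append((i, f"null {key}"))
        elif row[key] in seen:
            issues.append((i, f"duplicate {key}: {row[key]}"))
        else:
            seen.add(row[key])
        if not (0 <= row.get("score", 0) <= 100):  # hypothetical range rule
            issues.append((i, "score out of range"))
    return issues

rows = [{"id": 1, "score": 50}, {"id": 1, "score": 120}, {"id": None}]
print(quality_report(rows))
```

Emitting (row, reason) pairs rather than a pass/fail flag is what makes such checks usable for lineage and triage downstream.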
Posted 4 days ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a highly experienced Senior Data Software Engineer to join our dynamic team and tackle challenging projects that will enhance your skills and career. As a Senior Engineer, your contributions will be critical in designing and implementing Data solutions across a variety of projects. The ideal candidate will possess deep experience in Big Data and associated technologies, with a strong emphasis on Apache Spark, Python, Azure, and AWS.

Responsibilities:
- Develop and execute end-to-end Data solutions to meet complex business needs
- Work collaboratively with interdisciplinary teams to comprehend project needs and deliver superior software solutions
- Apply your expertise in Apache Spark, Python, Azure, and AWS to create scalable and efficient data processing systems
- Maintain and enhance the performance, security, and scalability of Data applications
- Keep abreast of industry trends and technological advancements to foster continuous improvement in our development practices

Requirements:
- 5-8 years of direct experience in Data and related technologies
- Advanced knowledge and hands-on experience with Apache Spark
- High-level proficiency with Hadoop and Hive
- Proficiency in Python
- Prior experience with AWS and Azure native Cloud data services

Technologies: Hadoop, Hive
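The Spark expertise this posting centres on follows one basic shape: a map step, a shuffle that groups by key, and a reduce step. The sketch below mimics that shape with stdlib building blocks only, the same structure a PySpark `rdd.map(...).reduceByKey(...)` job would take; it is an analogy for illustration, not Spark itself.

```python
# Word count in the map -> shuffle (group by key) -> reduce shape of Spark jobs.
from collections import defaultdict
from functools import reduce

def word_count(lines):
    mapped = [(w.lower(), 1) for line in lines for w in line.split()]  # map
    groups = defaultdict(list)                                         # shuffle
    for key, value in mapped:
        groups[key].append(value)
    return {k: reduce(lambda a, b: a + b, vs) for k, vs in groups.items()}  # reduce

print(word_count(["big data", "Big Data platforms"]))
```

In real Spark the shuffle is the expensive, cluster-wide step, which is why minimizing shuffles is the core of the performance tuning this role asks for.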
Posted 4 days ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description: We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Lead Software Engineer at JPMorgan Chase within the Corporate Technology - Wholesale Credit Risk team, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for conducting critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job Responsibilities:
- Execute creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Develop secure, high-quality production code, and review and debug code written by others
- Identify opportunities to eliminate or automate remediation of recurring issues to improve the overall operational stability of software applications and systems
- Lead evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture
- Lead communities of practice across Software Engineering to drive awareness and use of new and leading-edge technologies
- Add to the team culture of diversity, equity, inclusion, and respect

Required Qualifications, Capabilities, and Skills:
- Formal training or certification in software engineering concepts and 5+ years of applied experience
- Hands-on practical experience delivering system design, application development, testing, and operational stability
- Advanced proficiency in one or more programming languages, including Java
- Proficiency in automation and continuous delivery methods
- Proficient in all aspects of the Software Development Life Cycle
- Advanced understanding of agile
methodologies such as CI/CD, Application Resiliency, and Security Demonstrated proficiency in software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.) In-depth knowledge of the financial services industry and their IT systems Excellent analytical and problem solving skills Practical cloud native experience Preferred Qualifications, Capabilities, And Skills Proficient in building containerized (K8s) software applications using Java 17/21, Spring Framework, ORM and Caching Hands-on experience with SQL and NoSQL databases, Apache Spark, Kafka in Kubernetes environment Familiarity with modern front-end technologies ABOUT US JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We offer a competitive total rewards package including base salary determined based on the role, experience, skill set and location. Those in eligible roles may receive commission-based pay and/or discretionary incentive compensation, paid in the form of cash and/or forfeitable equity, awarded in recognition of individual achievements and contributions. We also offer a range of benefits and programs to meet employee needs, based on eligibility. These benefits include comprehensive health care coverage, on-site health and wellness centers, a retirement savings plan, backup childcare, tuition reimbursement, mental health support, financial coaching and more. Additional details about total compensation and benefits will be provided during the hiring process. 
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. JPMorgan Chase & Co. is an Equal Opportunity Employer, including Disability/Veterans
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Roles and Responsibilities: This role involves architecting and designing end-to-end solutions and a blueprint for implementing and deploying solutions based on Google Cloud Platform (GCP) to enhance operational efficiency and drive digital and data transformation within the industrial and government sectors.
- Architect and design scalable, secure, and cost-effective cloud solutions on GCP tailored to the needs of the industrial and government sectors, focused on industrial data lakes and analytics.
- Provide expert-level guidance on cloud adoption, data migration strategies, and digital transformation projects specific to the industrial and government sectors.
- Develop blueprints for the implementation and deployment of GCP services, including compute, storage, networking, and data analytics, ensuring alignment with best practices and client requirements.
- Deliver technical and pre-sales services on the GCP cloud platform: services and activities ranging from data migration, end-to-end data pipelines, data management, and sizing and provisioning, through to implementation.

Job Scope: Full-time Cloud (GCP) Specialist/Consultant with a strong background in pre-sales and technical services on the GCP cloud platform.
- Develop scalable and secure data lake and data warehouse architectures using services such as (but not limited to) BigQuery, Cloud Storage, Dataproc, and Pub/Sub.
- Design and implement efficient data ingestion, transformation, and processing pipelines using tools such as (but not limited to) Cloud Dataflow, Apache Beam, and Dataproc/Spark.
- Design and implement data security and governance using IAM, VPC Service Controls, and Cloud DLP, and integrate with data governance tools (e.g., Data Catalog).
- Build technical solution proposals with solution architectures, sizing, and planning for prospective bids in the GCP cloud platform space.
- Groom and train internal and external technical and non-technical users of solutions and components in the GCP cloud platform.
- Create prototypes and demonstrate solutions on the GCP platform (streaming ingestion, analytical products, etc.).

Job Location: Bangalore
Work model: Hybrid, Sunday (work from home) to Thursday (work from office)
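The ingestion, transformation, and processing pipelines described in this posting share one shape: a source feeding a chain of transforms into a sink. The sketch below uses plain generators to show that shape; on GCP it maps onto Dataflow/Apache Beam stages (reads, ParDo transforms, writes), and the stage names here are illustrative.

```python
# Source -> transform -> sink, the basic pipeline shape behind Dataflow/Beam.
def ingest(records):
    """Source stage: yield raw records one at a time."""
    for r in records:
        yield r.strip()

def transform(records):
    """Transform stage: drop blanks (cleansing) and normalize case."""
    for r in records:
        if r:
            yield r.upper()

def run_pipeline(source, sink):
    """Drive the chain, writing each transformed record to the sink."""
    for record in transform(ingest(source)):
        sink.append(record)
    return sink

out = run_pipeline(["sensor-1 ", "", "sensor-2"], [])
print(out)  # ['SENSOR-1', 'SENSOR-2']
```

Because each stage is a generator, records stream through one at a time rather than materializing in memory, which is the same property that lets Beam pipelines run over unbounded streaming inputs.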
Posted 5 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview

Working at Atlassian: Atlassians can choose where they work, whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

Responsibilities:
- Partner with Data Science, Product Management, Analytics, and Business teams to review and gather the data/reporting/analytics requirements and build trusted and scalable data models, data extraction processes, and data applications to help answer complex questions.
- Design and implement data pipelines to ETL data from multiple sources into a central data warehouse.
- Design and implement real-time data processing pipelines using Apache Spark Streaming.
- Improve data quality by leveraging internal tools/frameworks to automatically detect and mitigate data quality issues.
- Develop and implement data governance procedures to ensure data security, privacy, and compliance.
- Implement new technologies to improve data processing and analysis.
- Coach and mentor junior data engineers to enhance their skills and foster a collaborative team environment.

Qualifications:
- A BE in Computer Science or equivalent with 8+ years of professional experience as a Data Engineer or in a similar role
- Experience building scalable data pipelines in Spark using the Airflow scheduler/executor framework or similar scheduling tools.
- Experience with Databricks and its APIs.
- Experience with modern databases (Redshift, DynamoDB, MongoDB, Postgres, or similar) and data lakes.
- Proficiency in one or more programming languages such as Python/Scala and rock-solid SQL skills.
- Champion automated builds and deployments using CI/CD tools like Bitbucket and Git
- Experience working with large-scale, high-performance data processing systems (batch and streaming)

Our perks & benefits: Atlassian offers a variety of perks and benefits to support you, your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more.

About Atlassian: At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh .
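The real-time pipelines this posting describes rest on one recurring pattern: bucketing timestamped events into fixed windows and aggregating per window. A stdlib sketch of that pattern; the window size and event shape are illustrative, and Spark Streaming would express the same idea with its own windowing operators.

```python
# Fixed-window event counting, the core pattern behind streaming aggregation.
from collections import defaultdict

def windowed_counts(events, window_seconds=60):
    """events: iterable of (epoch_seconds, key) pairs.
    Returns {(window_start, key): count} for fixed, non-overlapping windows."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (30, "click"), (65, "click"), (70, "view")]
print(windowed_counts(events))
```

Real streaming engines add the hard parts this sketch omits: late-arriving events, watermarks, and incremental state across micro-batches.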
Posted 5 days ago
10.0 years
0 Lacs
India
On-site
Job Title: Oracle SOA Administrator

Summary: We seek an experienced Oracle SOA 12c administrator to manage and maintain our SOA infrastructure, including WebLogic Server, OSB, and related technologies. This role requires strong troubleshooting, deployment, and monitoring skills.

Responsibilities:
- Administer Oracle SOA Suite 12c and WebLogic Server.
- Deploy, monitor, and troubleshoot SOA applications.
- Manage JMS resources and data sources.
- Use OEM for monitoring and alerting.
- Automate deployments with tools like Jenkins and MyST.
- Resolve application issues and execute SQL as needed.
- Manage SSL certificates.
- Provide 24/7 support.
- Document processes and participate in change management.
- Monitor server health.
- Experience with Apache/OHS is a plus; other application server experience is beneficial.

Qualifications:
- 10+ years' experience with Oracle SOA Suite 12c and WebLogic Server.
- Strong troubleshooting and deployment skills.
- Proficient with OEM and scripting (WLST, Unix shell).
- Experience with Jenkins and MyST preferred.
- Excellent communication skills.
- Database administration experience is a plus.

Education: Bachelor's degree in Computer Science or a related field.
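One of the routine chores above, managing SSL certificates, usually includes tracking expiry dates so renewals happen before outages. The sketch below is a minimal, hypothetical illustration using Python's standard `ssl` module to interpret the `notAfter` timestamp in the format `ssl.SSLSocket.getpeercert()` returns; the certificate data and dates are invented for the example, and a real check would pull the certificate from the live endpoint.

```python
import ssl

SECONDS_PER_DAY = 86400

def days_until_expiry(not_after, now):
    """Days remaining before a certificate's notAfter timestamp.

    `not_after` uses the OpenSSL text format that getpeercert() returns,
    e.g. 'Jun  1 12:00:00 2031 GMT'. `now` is a POSIX timestamp.
    """
    expiry = ssl.cert_time_to_seconds(not_after)
    return (expiry - now) / SECONDS_PER_DAY

# Hypothetical certificate metadata, shaped like getpeercert() output.
cert = {"notAfter": "Jun  1 12:00:00 2031 GMT"}

# Fix "now" to a known timestamp so the example is deterministic;
# a real monitor would pass time.time() instead.
reference = ssl.cert_time_to_seconds("Jun  1 12:00:00 2030 GMT")
remaining = days_until_expiry(cert["notAfter"], now=reference)
print(round(remaining))  # 365 days between the two timestamps
```

A cron job wrapping this check can page the on-call administrator when `remaining` drops below a renewal threshold (30 days is a common choice).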
Posted 5 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities:
- Design, develop, and maintain both front-end and back-end components of web applications.
- Write clean, efficient, and maintainable code using languages and frameworks such as JavaScript, HTML5, jQuery, React, Python, or Node.js.
- Build front-end applications with appealing visual design.
- Develop and manage databases, ensuring data integrity and security.
- Create and maintain RESTful and GraphQL APIs.
- Implement JWT and OAuth for secure authentication and authorization.
- Implement automated testing frameworks and conduct thorough testing.
- Manage the deployment process, including CI/CD pipelines.
- Work with development teams and product managers to create efficient software solutions.
- Lead and mentor junior developers, providing guidance and support.
- Oversee the entire software development lifecycle from conception to deployment.

Good to have:
- Bachelor's degree or higher in Computer Science or a related field.
- 10+ years of experience as a Full Stack Developer or in a similar role.
- Experience developing web and mobile applications.
- Experience with version control systems like Git.
- Proficiency in multiple front-end languages and libraries (e.g. HTML/CSS, JavaScript, XML, jQuery, React, Angular, ASP.NET).
- Proficiency in multiple back-end languages (e.g. C#, Python, .NET Core) and frameworks (e.g. Node.js, Django).
- Knowledge of databases (e.g. SQL Server, MySQL, MongoDB), web servers (e.g. Apache), and UI/UX design.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Understanding of software development principles and best practices.
- Ability to conduct regular code reviews to ensure code quality and adherence to standards.
- Ability to work efficiently in a collaborative team environment.
- Excellent problem-solving and analytical skills.
- Experience with other JavaScript frameworks and libraries (e.g. Angular, Vue.js).
- Knowledge of DevOps practices and tools like Azure DevOps, Jenkins, or GitLab CI.
- Familiarity with data warehousing and ETL processes.
- Experience with microservices architecture.
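The JWT requirement above can be illustrated end to end. The sketch below is a minimal HS256 sign-and-verify round trip built only on Python's standard library, to show what the token format contains; it is a teaching sketch under invented claim names, not a production implementation (real services should use a vetted library such as PyJWT and validate registered claims like `exp`).

```python
import base64
import hashlib
import hmac
import json

def _b64url(data):
    """Base64url-encode bytes without padding, per the JWT spec."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload, secret):
    """Produce a compact HS256 JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token, secret):
    """Return the payload if the signature checks out, else None."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    padded = body + "=" * (-len(body) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

secret = b"demo-secret"  # in practice, load from a secrets manager
token = sign_jwt({"sub": "user-42", "role": "admin"}, secret)
claims = verify_jwt(token, secret)
print(claims["sub"])  # user-42
```

Note the use of `hmac.compare_digest` rather than `==`: constant-time comparison prevents timing attacks against the signature check.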
Posted 5 days ago
The Apache Software Foundation maintains a wide range of widely used open-source software projects. In India, demand for professionals with expertise in Apache tools and technologies is on the rise, and job seekers pursuing Apache-related roles have opportunities across many industries. Let's delve into the Apache job market in India to gain a better understanding of the landscape.
India's major IT hubs are known for their thriving technology sectors and see high demand for Apache professionals across organizations of all sizes.
The salary range for Apache professionals in India varies by experience and skill level:
- Entry-level: INR 3-5 lakhs per annum
- Mid-level: INR 6-10 lakhs per annum
- Experienced: INR 12-20 lakhs per annum
In the Apache job market in India, a typical career path may progress as follows:
1. Junior Developer
2. Developer
3. Senior Developer
4. Tech Lead
5. Architect
Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:
- Linux
- Networking
- Database Management
- Cloud Computing
As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!