5.0 - 10.0 years
3 - 8 Lacs
Bengaluru
Work from Office
Responsibilities:
• Develop and maintain functional, stable web applications using clean, well-structured code.
• Troubleshoot and debug applications.
• Analyze user needs and software requirements to determine the feasibility of designs within time and cost constraints.
• Use the Laravel PHP framework to build, test, and maintain web applications.
• Identify and correct bottlenecks and fix bugs to optimize application performance.
• Develop and maintain efficient, reusable, and reliable PHP code.
• Create and maintain technical documentation.
• Collaborate with the team to design and launch new features.
• Ensure the best possible performance, quality, and responsiveness of the applications.
• Contribute to all phases of the development lifecycle.
• Provide technical support to clients and teams within the organization.
Requirements:
• 5+ years of experience developing and implementing web applications using the Laravel PHP framework.
• Proficiency in building SEO-friendly, interactive websites.
• Strong experience in database schema design.
• Proficiency in frontend technologies such as HTML, CSS, jQuery, and Bootstrap.
• Hands-on experience with the Git version control system and JIRA for requirement tracking.
• Good written and verbal communication skills and the ability to handle multiple assignments individually or with a team.
• Experience developing custom modules.
• Ability to troubleshoot application issues in a complex production environment.
• Knowledge of MySQL databases.
Posted 1 week ago
9.0 - 14.0 years
30 - 40 Lacs
Hyderabad
Hybrid
Role & Responsibilities
Must-Have Skills:
• Expertise in microservices architecture and service decomposition strategies.
• Strong experience with .NET Core, Web API, and RESTful service design.
• Proficiency in Clean Architecture, CQRS, and event-driven architecture.
• Hands-on experience with Apache Kafka for message-based communication (see the sketch below).
• Deep understanding of design principles, SOLID, and design patterns.
• Experience with asynchronous programming, dependency injection, and middleware development.
• Familiarity with Entity Framework, LINQ, PostgreSQL, and MongoDB.
• Experience with Redis or similar caching mechanisms in distributed environments.
• Knowledge of CI/CD processes using tools such as Azure DevOps, GitHub Actions, etc.
• Implementation knowledge of API versioning and HTTP standards.
• Strong understanding of authentication/authorization using OAuth, JWT, etc.
• Proficiency in writing unit tests and conducting peer code reviews.
Preferred candidate profile: .NET Core (6), .NET Framework (5), ADO.NET (5), Web API (5); microservices (authentication, communication, consistency, logging); CQRS; proficiency in building Web APIs (data validation, logging, asynchronous processing); the design principles followed as part of daily development; knowledge of architectures (Clean, event-driven, etc.), design patterns, and architectural patterns.
Interested candidates may apply and share the details below:
• Full name as per Aadhaar card
• Total experience
• Current CTC
• Expected CTC
• Organization
• Notice period / last working day
• Offer in hand (Y/N); if yes, offer amount
• Preferred work location
• Willingness to work the 4:00 PM-12:30 AM shift (2-3 hrs work from office) in hybrid mode
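For context on the Kafka-based, message-driven communication this role emphasizes, here is a minimal consumer sketch. The role's stack is .NET; Python and the kafka-python client are used purely as a compact illustration, and the topic, broker, and group names are assumptions.

```python
# Minimal Kafka consumer sketch (kafka-python). Topic, broker, and group
# names are hypothetical; the role itself targets a .NET stack.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "order-events",                          # assumed topic name
    bootstrap_servers=["localhost:9092"],    # assumed broker address
    group_id="order-projection-service",     # consumer group enables scaling out
    auto_offset_reset="earliest",            # replay from the start on first run
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In a CQRS read model, each event would update a query-side projection.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```

The same shape applies in .NET with a library such as Confluent's Kafka client: subscribe to a topic within a consumer group, deserialize each message, and apply it to the read-side store.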
Posted 1 week ago
3.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Web Developer (Junior Role)
About the Organization: Finovista is a leading global service provider based in India, specializing in technical assistance, program management, capacity building, and in-country representation in key development sectors. Our expertise lies in sectors crucial to sustainability and global development, including climate change, energy, clean cooking, rural technologies, advanced manufacturing, and climate finance. With a strong commitment to delivering innovative solutions and driving positive change, we have a proven track record of successful program management and implementation. Our extensive partnerships extend to development agencies, governments, universities, business chambers, corporates, startups, and SMEs. Finovista is implementing the Technology Development Fund (TDF) Scheme in collaboration with DRDO to promote indigenous defence technologies, especially supporting Indian MSMEs and startups, and also runs multiple international projects in science and clean energy.
About the Project: This role supports the Technology Development Fund (TDF) scheme of DRDO, executed by the Directorate of Technology Development Fund. The initiative focuses on strengthening the defence technology development ecosystem by enabling MSMEs and startups through grants and structured support. The selected candidate will be part of the TDF Desk managed by Finovista, working directly on technology management, website operations, stakeholder outreach, and related IT functions.
Job Location: New Delhi
Note: As DRDO HQ is a high-security zone, selected candidates will need to obtain a Police Clearance Certificate upon joining Finovista.
Key Responsibilities
• Support the design, development, and maintenance of the TDF website.
• Ensure functional, accessible, and user-friendly web interfaces.
• Assist in enhancing features such as website analytics, blogs, admin dashboard tools, project management components, and compliance.
• Develop and maintain web applications using PHP and Laravel.
• Apply hands-on experience with Laravel, along with Core PHP, CakePHP, CodeIgniter, HTML5, CSS3, JavaScript, jQuery, AJAX, and Bootstrap.
• Assist in custom module integration.
• Help manage MySQL/MariaDB databases, ensuring query efficiency and data security.
• Support integration of APIs and third-party services.
• Conduct routine server checks and assist with backup, monitoring, and security protocols (Linux/Ubuntu/Apache stack).
• Coordinate with the NIC team for server hosting, website hosting compliance (including CERT-In guidelines, if applicable), and necessary upgrades.
Qualifications
• Graduate/Postgraduate in Engineering or IT (preferably CS/IT, ECE, Electrical, or related streams) with minimum 60% marks.
• 6 months to 3 years of relevant experience in website development and management.
• Hands-on experience with PHP (Core, Laravel, CakePHP, CodeIgniter), HTML5, CSS3, JavaScript, jQuery, AJAX, and Bootstrap.
• Familiarity with Linux/Ubuntu, Apache servers, and basic server administration.
• Exposure to website audits, security protocols, and compliance standards such as CERT-In is a plus.
• Good communication skills and the ability to work in coordination with stakeholders.
Kindly share your resume at career@finovista.com with a statement of suitability/position applied for, current CTC, expected CTC, current location, and notice period. Only shortlisted candidates will be called for an in-person interaction. Preference will be given to candidates based in or near New Delhi.
Posted 1 week ago
9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview
Job Title: Senior Engineer (Java) – Home Loan Savings, AVP
Location: Pune, India
Role Description
Our Home Loan Savings teams at Deutsche Bank - Private Bank (TDI) develop and maintain applications for the home loan savings business of private customers (BHW). Changes are implemented in response to time-to-market challenges as well as to evolve the application landscape using Google Cloud technology. In addition to the SAP-based home loan savings and mortgage lending core systems, the application portfolio also includes the business partner data systems, the connection to payment transactions, the interface to the frontends, and the data preparation and delivery for the bank's dispositive systems.
As a Deployment Specialist / System Integrator you will be part of the development team and work closely with production and operations units as well as business areas. You bring deployment, configuration, and development skills to reinforce the development team within a squad. You will make extensive use of continuous integration tools in the context of Deutsche Bank's digitalization journey and contribute to the success of the growing Home Loan Savings domain.
What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
• Best-in-class leave policy
• Gender-neutral parental leave
• 100% reimbursement under the childcare assistance benefit (gender neutral)
• Sponsorship for industry-relevant certifications and education
• Employee Assistance Program for you and your family members
• Comprehensive hospitalization insurance for you and your dependents
• Accident and term life insurance
• Complimentary health screening for ages 35 and above
Your Key Responsibilities
• You are involved in the whole software deployment and integration lifecycle, from analyzing infrastructure requirements through deploying and testing software to maintaining and continuously improving it.
• Your primary focus will be configuration and change management, health monitoring, and utilization of systems, including trending, performance analysis, and reporting.
• You configure and manage DR replication to ensure systems can be recovered rapidly during outage events.
• You are part of third-level support for incidents, problem management, and monitoring of midrange environments (RHEL, cloud).
• You collaborate with other team members, third parties, and vendors to achieve the sprint objectives.
• You actively participate in and contribute to sprint activities and ceremonies.
Your Skills and Experience
• Deep technology skills on x86 and the Google platform in Apache, Tomcat, and Oracle database from an integration perspective.
• Knowledge of midrange infrastructure (virtualization concepts, implementation, monitoring) and agile development tooling (e.g., Bitbucket, TeamCity, Artifactory, Jira, UCD).
• Experience deploying applications in midrange (Linux) and cloud environments.
• At least 9 years of in-depth development experience in at least one development environment.
• Experience with batch processing and job scheduling systems, and with cloud technology (e.g., OpenShift 4 containers).
• Experience in monitoring and troubleshooting application performance, with a demonstrated ability to identify, research, and analyze technical problems and recommend solutions.
• Proactive team player with good communication and English-language skills.
How We'll Support You
• Training and development to help you excel in your career
• Coaching and support from experts in your team
• A culture of continuous learning to aid progression
• A range of flexible benefits that you can tailor to suit your needs
About Us and Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair, and inclusive work environment.
Posted 1 week ago
7.0 years
8 - 9 Lacs
Thiruvananthapuram
On-site
7 - 9 Years | 4 Openings | Trivandrum
Role Description
Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools such as Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
Outcomes:
• Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reuse of proven solutions.
• Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
• Interpret requirements, create optimal architecture, and design solutions in accordance with specifications.
• Document and communicate milestones/stages for end-to-end delivery.
• Code using best standards; debug and test solutions to ensure best-in-class quality.
• Tune code performance and align it with the appropriate infrastructure, understanding the cost implications of licenses and infrastructure.
• Create data schemas and models effectively.
• Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes.
• Validate results with user representatives, integrating the overall solution.
• Influence and enhance customer satisfaction and employee engagement within project teams.
Measures of Outcomes:
• Adherence to engineering processes and standards
• Adherence to schedule/timelines
• Adherence to SLAs where applicable
• Number of defects post-delivery
• Number of non-compliance issues
• Reduction in recurrence of known defects
• Quick turnaround of production bugs
• Completion of applicable technical/domain certifications
• Completion of all mandatory training requirements
• Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
• Average time to detect, respond to, and resolve pipeline failures or data issues
• Number of data security incidents or compliance breaches
Outputs Expected:
• Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates, and checklists. Review code for the team and peers.
• Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, and test cases and results.
• Configure: Define and govern the configuration management plan. Ensure compliance from the team.
• Test: Review/create unit test cases, scenarios, and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
• Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
• Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
• Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
• Estimate: Create and provide input for effort and size estimation and plan resources for projects.
• Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
• Release: Execute and monitor the release process.
• Design: Contribute to the creation of designs (HLD, LLD, SAD)/architecture for applications, business components, and data models.
• Interface with Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
• Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
• Certifications: Obtain relevant domain and technology certifications.
Skill Examples:
• Proficiency in SQL, Python, or other programming languages used for data manipulation.
• Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
• Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud, particularly their data-related services (e.g., AWS Glue, BigQuery).
• Ability to conduct tests on data pipelines and evaluate results against data quality and performance specifications.
• Experience in performance tuning.
• Experience in data warehouse design and cost improvements.
• Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
• Ability to communicate and explain design/development aspects to customers.
• Ability to estimate time and resource requirements for developing/debugging features/components.
• Participation in RFP responses and solutioning.
• Mentoring team members and guiding them in relevant upskilling and certification.
Knowledge Examples:
• Knowledge of the ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, and Azure ADF/ADLS.
• Proficiency in SQL for analytics, including windowing functions.
• Understanding of data schemas and models.
• Familiarity with domain-related data.
• Knowledge of data warehouse optimization techniques.
• Understanding of data security concepts.
• Awareness of patterns, frameworks, and automation practices.
Additional Comments:
We are seeking a highly experienced Senior Data Engineer to design, develop, and optimize scalable data pipelines in a cloud-based environment. The ideal candidate will have deep expertise in PySpark, SQL, and Azure Databricks, and experience with either AWS or GCP. A strong foundation in data warehousing, ELT/ETL processes, and dimensional modeling (Kimball/star schema) is essential for this role.
Must-Have Skills:
• 8+ years of hands-on experience in data engineering or big data development.
• Strong proficiency in PySpark and SQL for data transformation and pipeline development (see the sketch below).
• Experience working in Azure Databricks or equivalent Spark-based cloud platforms.
• Practical knowledge of cloud data environments: Azure, AWS, or GCP.
• Solid understanding of data warehousing concepts, including Kimball methodology and star/snowflake schema design.
• Proven experience designing and maintaining ETL/ELT pipelines in production.
• Familiarity with version control (e.g., Git), CI/CD practices, and data pipeline orchestration tools (e.g., Airflow, Azure Data Factory).
Skills: Azure Data Factory, Azure Databricks, PySpark, SQL
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
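As a reference point for the PySpark and SQL pipeline skills called for above, here is a minimal sketch of an ingest-join-aggregate step. The paths, tables, and columns are assumptions for illustration, not details from the role.

```python
# Minimal PySpark transformation sketch: ingest, wrangle, join, aggregate.
# All table, column, and path names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")      # assumed source
customers = spark.read.parquet("s3://example-bucket/raw/customers/")

daily_revenue = (
    orders
    .where(F.col("status") == "COMPLETED")            # wrangle: drop incomplete rows
    .join(customers, on="customer_id", how="inner")   # join across sources
    .groupBy("order_date", "region")                  # shape for the warehouse
    .agg(F.sum("amount").alias("revenue"),
         F.countDistinct("customer_id").alias("buyers"))
)

daily_revenue.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/daily_revenue/")
```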
Posted 1 week ago
2.0 years
9 - 9 Lacs
Thiruvananthapuram
On-site
Equifax is seeking creative, high-energy, driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.
What you'll do
• Perform general application development activities, including unit testing, code deployment to the development environment, and technical documentation.
• Work on one or more projects, making contributions to unfamiliar code written by team members.
• Participate in the estimation process, use case specifications, reviews of test plans and test cases, requirements, and project planning.
• Diagnose and resolve performance issues.
• Document code and processes so that any other developer can dive in with minimal effort.
• Develop and operate high-scale applications from the backend to the UI layer, focusing on operational excellence, security, and scalability.
• Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
• Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
• Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
• Participate in a tight-knit engineering team employing agile software development practices.
• Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
• Write, debug, and troubleshoot code in mainstream open-source technologies.
• Lead the effort for sprint deliverables and solve problems of medium complexity.
What experience you need
• Bachelor's degree or equivalent experience
• 2+ years of experience working with software design and the Java, Python, and JavaScript programming languages, plus SQL
• 2+ years of experience with software build management tools like Maven or Gradle
• 2+ years of experience with HTML, CSS, and frontend/web development
• 2+ years of experience with software testing, performance, and quality engineering techniques and strategies
• 2+ years of experience with cloud technology: GCP, AWS, or Azure
What could set you apart
• Knowledge of or experience with Apache Beam for stream and batch data processing (see the sketch below).
• Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark.
• Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
• Exposure to data visualization tools or platforms.
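To illustrate the Apache Beam batch/stream model mentioned above, here is a minimal batch pipeline sketch using Beam's Python SDK. The bucket paths and record format are assumptions.

```python
# Minimal Apache Beam batch pipeline sketch (Python SDK): count events by type.
# Input/output paths and the CSV layout are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.csv")
        | "Parse" >> beam.Map(lambda line: line.split(",")[0])  # keep event type
        | "Pair" >> beam.Map(lambda kind: (kind, 1))
        | "Count" >> beam.CombinePerKey(sum)                    # batch aggregation
        | "Format" >> beam.MapTuple(lambda kind, n: f"{kind},{n}")
        | "Write" >> beam.io.WriteToText("gs://example-bucket/out/event_counts")
    )
```

The same pipeline shape runs in streaming mode by swapping the source (e.g., a Pub/Sub read) and adding windowing; that portability between batch and stream is Beam's core idea.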
Posted 1 week ago
3.0 years
5 - 9 Lacs
Kazhakuttam
Remote
Role Overview
We are hiring a DevOps Engineer with expertise in AWS infrastructure, EC2, load balancers, DNS, network routing, and web server technologies. You will help build and manage secure, scalable, and high-availability infrastructure to support WAF services and microservice deployments.
Key Responsibilities:
• Design, implement, and manage CI/CD pipelines for WAF components and internal cloud services.
• Provision and administer AWS EC2 instances, EBS volumes, and associated VM resources (see the sketch below).
• Configure and maintain AWS Load Balancers (ALB/NLB), listener rules, target groups, and TLS termination.
• Set up and troubleshoot DNS records, zones, and routing policies using Route 53 or equivalent DNS services.
• Deploy and manage services on Kubernetes (EKS, GKE, AKS, or self-hosted clusters).
• Automate infrastructure using Terraform, Helm, and Ansible.
• Manage web server configurations (NGINX, Apache), including reverse proxy, SSL, and request routing.
• Handle TLS/SSL certificate management and traffic encryption.
• Monitor infrastructure with Prometheus, Grafana, CloudWatch, or the ELK stack.
• Collaborate with security teams to implement WAF rules, hardening, and DevSecOps best practices.
• Participate in incident response, troubleshooting, and RCA documentation.
Required Qualifications:
• 3+ years of experience in DevOps, SRE, or Infrastructure Engineering.
• Proficient in managing AWS EC2, Load Balancers, and cloud VM provisioning.
• Solid knowledge of DNS, routing concepts, and basic network troubleshooting.
• Experience configuring NGINX and Apache web servers (reverse proxy, SSL, performance tuning).
• Strong hands-on experience with Kubernetes, Docker, and containerized microservices.
• Infrastructure as Code (IaC) with Terraform, Helm, and Ansible.
• Experience with CI/CD tools like GitLab CI, Jenkins, or ArgoCD.
• Scripting in Bash, Python, or Go.
Preferred Qualifications (Nice to Have):
• Experience configuring or deploying Web Application Firewalls (WAFs) such as Prophaze, ModSecurity, or AWS WAF.
• Familiarity with the OWASP Top 10, container hardening, and CVE mitigation tools (Trivy, ZAP, etc.).
• Exposure to DevSecOps practices and multi-tenant SaaS environments.
• Relevant certifications such as: AWS Certified Solutions Architect – Associate or Professional; AWS Certified DevOps Engineer; Certified Kubernetes Administrator (CKA); Cisco Certified Network Associate (CCNA).
Why Join Us:
• Build next-gen cloud infrastructure powering cutting-edge WAF technology.
• Work with a collaborative and fast-paced DevSecOps team.
• Flexible remote environment with opportunities for certification and upskilling.
• Directly influence the performance, security, and scalability of core systems.
Job Type: Full-time
Pay: ₹45,000.00 - ₹80,000.00 per month
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Experience: DevOps: 3 years (Required)
Work Location: In person
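As context for the EC2 provisioning work described above, here is a minimal sketch using boto3. The posting lists Terraform, Helm, and Ansible for automation, so treat this purely as an illustration; the AMI ID, key pair, and tag values are assumptions.

```python
# Minimal EC2 provisioning sketch with boto3.
# AMI ID, key pair, and tag values are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # assumed AMI
    InstanceType="t3.micro",
    KeyName="waf-ops-key",               # assumed key pair
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "waf-edge-node"},
                 {"Key": "Team", "Value": "devops"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])  # block until up
print(f"Instance {instance_id} is running")
```

In practice the same resource would usually be declared in Terraform so that the instance, its EBS volumes, and the target-group attachment are versioned and reproducible.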
Posted 1 week ago
8.0 years
0 Lacs
Andhra Pradesh, India
On-site
Key Responsibilities
We are currently seeking a skilled and experienced Java/J2EE developer with a minimum of 8 years of hands-on experience.
• Capability to create design solutions independently for a given module.
• Develop and maintain web applications using Java and Spring Boot, and user interfaces using HTML, CSS, and JavaScript.
• Write and maintain unit tests using JUnit and Mockito.
• Deploy and manage applications on servers such as JBoss, WebLogic, Apache, and Nginx.
• Ensure application security.
• Familiarity with build tools such as Maven and Gradle.
• Experience with caching technologies like Redis and Coherence.
• Understanding of Spring Security.
• Knowledge of Groovy is a plus.
• Excellent problem-solving skills and attention to detail.
• Strong communication and teamwork abilities.
Qualifications
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 6-8 years of experience in full stack development.
• Proven track record of delivering high-quality software solutions, working with cross-functional teams to define, design, and ship new features.
• Troubleshoot and resolve issues in a timely manner.
• Stay updated with the latest industry trends and technologies.
• Should have knowledge of SQL.
Required Skills
Proficiency in HTML, CSS, and JavaScript; strong experience with Java and Spring frameworks (Spring Boot); SQL; and familiarity with CI/CD.
Posted 1 week ago
5.0 years
0 Lacs
Telangana
On-site
About Chubb
Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance, and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength, and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.
About Chubb India
At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.
Primary Responsibilities:
• Implement and configure resources as per approved middleware patterns.
• Onboard, maintain, and support enterprise applications deployed on middleware platforms.
• Keep applications and their environments (WebSphere, BizTalk, Tomcat, Apache/IIS) secure by ensuring all vulnerability patches/fixes are applied before due dates. This includes identifying opportunities for patching automation, process streamlining, and inventory corrections, coordinating with 300+ applications, and weekly/monthly reporting.
• Manage patching processes, including golden images.
• Support Axway or other SFTP/FTP products.
• Keep platforms and environments stable and available 24/7 by applying the latest industry trends in monitoring, alerting, and predicting possible issues and concerns.
• Analyze existing configurations and advise on possible improvements and environment enhancements.
• Provide administration support for IaaS and PaaS services, including backup and recovery and first-level problem determination.
• Coordinate with multiple teams and groups for successful implementation of middleware platform standards and procedures, and all integrations such as Single Sign-On, job schedulers, cron jobs, load balancers/VIPs, proxy servers, password management utilities, etc.
• Collaborate with core IT teams; provide guidance on networking, security, monitoring, and services.
• Oversee application environment build activities before go-live, coordinate and plan go-live activities, and provide middleware production support post go-live.
• Document environment build specifications, including diagrams and scripts to automate processes.
• Assist technical teams in identifying appropriate approved middleware patterns to meet technical requirements.
• Create professional technical documentation.
• Collaborate with global resources outside of normal business hours (when needed).
• Participate in the "non-business hours" on-call schedule.
• Provide technology leadership across enterprise shared services products and platforms in partnership with senior architects and product managers.
• Be accountable for system availability and stability of production environments.
• Support SDLC and ITSM tools for the firm, ensuring stability and best practices.
• Keep up with industry trends regarding APM, observability, and telemetry products.
• Drive innovation and automation of supported products.
• Build and foster relationships with external LOBs for adoption of products.
• Establish a customer experience feedback loop.
• Establish a continuous improvement mindset amongst the team.
Qualifications:
• BS/BA degree or equivalent experience.
• Administrative knowledge (5+ years) of middleware products such as WebSphere, WebLogic, Tomcat, Apache WebServer, IHS WebServer, BizTalk, and IIS WebServer.
• Support experience with Axway or other SFTP/FTP products.
• Proven IT track record, with a hands-on software development or production infrastructure management role.
• Strong hands-on experience supporting Linux platforms.
• Experience managing Windows systems.
• Proficiency in developing and debugging scripts (Bash, PowerShell, Python).
• Intermediate-level expertise in networking concepts.
• Excellent problem determination skills, the ability to debug complex cross-system problems, and the ability to document root cause, including remediation, detection, and avoidance.
• Ability to work independently and on a team with colleagues across the globe.
• Ability to manage multiple tasks concurrently.
• Self-starter who needs little administrative guidance; energetic and eager to find solutions to complex problems; able to self-learn new technologies.
• Proven understanding of, and hands-on experience providing support for, middleware technologies.
• Practical experience with Tivoli, SCOM, Netcool, Splunk, the Grafana suite, or ScienceLogic.
• Ability to set goals and project plans that align with organizational objectives.
• Strong ability to partner and influence at all levels.
• Strong understanding of product management and product/customer-centric organizations.
• Ability to collaborate with high-performing teams and individuals throughout the firm to accomplish common goals.
• Understanding of middleware, cloud, virtualization, and API technologies.
• Experience with agile and lean philosophies.
• Exceptional verbal and written communication skills and a proven ability to build strong relationships with internal and external groups.
• Strong product sense coupled with the ability to take a developer perspective.
• Experience with process improvement, workflow, benchmarking, and/or evaluation of business processes.
• Understanding of, and experience working with and leading, agile teams.
• Familiarity with CI/CD development and tools such as Jenkins, Jira, and Git/Bitbucket.
Preferred Skills and Experience:
Knowledge and hands-on experience with some of the following: Bash scripting, PowerShell, Python, Ansible, Terraform, Active Directory, DBMSs (e.g., SQL Server, Postgres, MySQL, DB2, Oracle), Jenkins, Git version control, and JSON. A strong understanding of insurance services and the regulation that surrounds the industry is a plus.
Why Chubb?
Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results.
• Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence.
• A great place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025, and 2025-2026.
• Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness, where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results.
• Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter.
• Growth and success: As we continue to grow, we are steadfast in our commitment to providing our employees with the best work experience, enabling them to advance their careers in a conducive environment.
Employee Benefits
Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include:
• Savings and investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), retiral benefits, and car lease that help employees optimally plan their finances.
• Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling, such as education reimbursement programs, certification programs, and access to global learning programs.
• Health and welfare benefits: We care about our employees' well-being in and out of work and have benefits like an Employee Assistance Program (EAP), yearly free health campaigns, and comprehensive insurance benefits.
Application Process
Our recruitment process is designed to be transparent and inclusive.
Step 1: Submit your application via the Chubb Careers Portal.
Step 2: Engage with our recruitment team for an initial discussion.
Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable).
Step 4: Final interaction with Chubb leadership.
Join Us
With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey.
Posted 1 week ago
0 years
0 Lacs
India
On-site
Location: IN - Hyderabad, Telangana
Goodyear Talent Acquisition Representative: Katrena Calimag-Rupera
Sponsorship Available: No
Relocation Assistance Available: No
STAFF DIGITAL SOFTWARE ENGINEER – Data Engineer
Are you interested in an exciting opportunity to help shape the user experience and design front-end applications for data-driven digital products that drive better process performance across a global company? The Data Driven Engineering and Global Information Technology groups at the Goodyear Technology India Center, Hyderabad, India are looking for a dynamic individual with a strong background in data engineering and infrastructure to partner with data scientists and information technology specialists, as well as our global technology and operations teams, to derive valuable insights from our expansive data sources and help develop data-driven solutions for important business applications across the company. Since its inception, the Data Science portfolio of projects continues to grow and includes areas of tire manufacturing, operations, business, and technology. The people in our Data Science group come from a broad range of backgrounds: Mathematics, Statistics, Cognitive Linguistics, Astrophysics, Biology, Computer Science, Mechanical, Electrical, Chemical, and Industrial Engineering, and of course Data Science. This diverse group works together to develop innovative tools and methods for simulating, modeling, and analyzing complex processes throughout our company. We'd like you to help us build the next generation of data-driven applications for the company and be a part of the Information Technology and Data Driven Engineering teams.
What You Will Do
We think you'll be excited about having opportunities to:
• Design and build robust, scalable, and efficient data pipelines and ETL processes to support analytics, data science, and digital products.
• Collaborate with cross-functional teams to understand data requirements and implement solutions that integrate data from diverse sources.
• Lead the development, management, and optimization of cloud-based data infrastructure using platforms such as AWS, Azure, or GCP.
• Architect and maintain highly available and performant relational database systems (e.g., PostgreSQL, MySQL) and NoSQL systems (e.g., MongoDB, DynamoDB).
• Partner with data scientists to ensure efficient and secure data access for modeling, experimentation, and production deployment.
• Build and maintain data services and APIs to facilitate access to curated datasets across internal applications and teams.
• Implement DevOps and DataOps practices, including CI/CD for data workflows, infrastructure as code, containerization (Docker), and orchestration (Kubernetes).
• Learn about the tire industry and tire manufacturing processes from subject matter experts.
• Be a part of cross-functional teams working together to deliver impactful results.
What We Expect
• Bachelor's degree in computer science or a similar technical field; a Master's degree in computer science or a similar field is preferred.
• 5 or more years of experience designing and maintaining data pipelines, cloud-based data systems, and production-grade data workflows.
• Experience with the following technology groups:
• Strong experience in Python, Java, or other languages for data engineering and scripting.
• Deep knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, DynamoDB), including query optimization and schema design.
• Experience designing and deploying solutions on cloud platforms like AWS (e.g., S3, Redshift, RDS), Azure, or GCP.
• Familiarity with data modeling, data warehousing, and distributed data processing frameworks (e.g., Apache Spark, Airflow, dbt); see the orchestration sketch below.
• Understanding of RESTful APIs and integration of data services with applications.
• Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins), Docker, Kubernetes, and infrastructure-as-code frameworks.
• Solid grasp of software engineering best practices, including code versioning, testing, and performance optimization.
• Good teamwork skills: the ability to work in a team environment and deliver results on time.
• Strong communication skills: capable of conveying information concisely to diverse audiences.
Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world's largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate
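As a reference for the orchestration frameworks mentioned above (Airflow in particular), here is a minimal DAG sketch. The DAG id, schedule, and task callables are assumptions; the `schedule` argument targets Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Minimal Airflow DAG sketch: a daily extract -> transform -> load chain.
# DAG id, schedule, and the task bodies are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():      # placeholder: pull raw records from a source system
    print("extracting...")

def transform():    # placeholder: clean and reshape the extracted data
    print("transforming...")

def load():         # placeholder: write curated data to the warehouse
    print("loading...")

with DAG(
    dag_id="daily_curated_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",        # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3            # declare task ordering
```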
Posted 1 week ago
5.0 years
3 - 7 Lacs
Hyderābād
On-site
Job Title: Databricks Developer / Data Engineer
Duration: 12 months with possible extension
Location: Hyderabad, Telangana (Hybrid); 1-2 days onsite at the client location
Job Summary:
We are seeking a highly skilled Databricks Developer / Data Engineer with 5+ years of experience in building scalable data pipelines, managing large datasets, and optimizing data workflows in cloud environments. The ideal candidate will have hands-on expertise in Azure Databricks, Azure Data Factory, and other Azure-native services, playing a key role in enabling data-driven decision-making across the organization.
Key Responsibilities:
• Design, develop, and maintain scalable ETL/ELT pipelines for data ingestion, transformation, and integration (see the sketch below).
• Work with both structured and unstructured data from a variety of internal and external sources.
• Collaborate with data analysts, scientists, and engineers to ensure data quality, integrity, and availability.
• Build and manage data lakes, data warehouses, and data models (Azure Databricks, Azure Data Factory, Snowflake, etc.).
• Optimize performance of large-scale batch and real-time processing systems.
• Implement data governance, metadata management, and data lineage practices.
• Monitor and troubleshoot pipeline issues; perform root cause analysis and proactive resolution.
• Automate data validation and quality checks.
• Ensure compliance with data privacy, security, and regulatory requirements.
• Maintain thorough documentation of architecture, data workflows, and processes.
Mandatory Qualifications:
• 5+ years of hands-on experience with: Azure Blob Storage, Azure Data Lake Storage, Azure SQL Database; Azure Logic Apps, Azure Data Factory, Azure Databricks, Azure ML; Azure DevOps Services, Azure API Management, and webhooks.
• Intermediate-level proficiency in Python scripting and PySpark.
• Basic understanding of Power BI and its visualization functionalities.
Technical Skills & Experience Required:
• Proficient in SQL and working with both relational and non-relational databases (e.g., SQL, PostgreSQL, MongoDB, Cassandra).
• Hands-on experience with Apache Spark, Hadoop, and Hive for big data processing.
• Proficiency in building scalable data pipelines using Azure Data Factory and Azure Databricks.
• Solid knowledge of cloud-native tools: Delta Lake, Azure ML, Azure DevOps.
• Understanding of data modeling, OLAP/OLTP systems, and data warehousing best practices.
• Experience with CI/CD pipelines, version control with Git, and working with Azure Repos.
• Knowledge of data security, privacy policies, and compliance frameworks.
• Excellent problem-solving, troubleshooting, and analytical skills.
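To ground the ETL/ELT pipeline responsibilities above, here is a minimal PySpark step of the kind one might run in an Azure Databricks notebook, writing curated data to a Delta table. The storage paths and table name are assumptions.

```python
# Minimal Databricks-style ingestion sketch: raw JSON -> curated Delta table.
# Storage account, container, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

raw = spark.read.json("abfss://raw@exampleaccount.dfs.core.windows.net/events/")

curated = (
    raw.dropDuplicates(["event_id"])                      # basic quality check
       .filter(F.col("event_type").isNotNull())           # drop malformed rows
       .withColumn("ingested_at", F.current_timestamp())  # lineage metadata
)

(curated.write
        .format("delta")
        .mode("append")
        .saveAsTable("analytics.curated_events"))         # assumed target table
```

In a production setup this step would typically be triggered and parameterized by an Azure Data Factory pipeline, with validation checks and alerting wrapped around it.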
Posted 1 week ago
0 years
0 Lacs
India
On-site
Why We Work at Dun & Bradstreet
Dun & Bradstreet unlocks the power of data through analytics, creating a better tomorrow. Each day, we are finding new ways to strengthen our award-winning culture and accelerate creativity, innovation and growth. Our 6,000+ global team members are passionate about what we do. We are dedicated to helping clients turn uncertainty into confidence, risk into opportunity and potential into prosperity. Bold and diverse thinkers are always welcome. Come join us! Learn more at dnb.com/careers.
D&B is looking for an experienced Senior Golang/Java Backend Developer to join our team in India and be instrumental in taking our products to the next level. In this role, you will work in close collaboration with a team of highly empowered, experienced developers who are building a high-performance, highly scaled global platform.
Responsibilities
• Conceive, build, and operate highly distributed systems deployed around the planet.
• Employ cutting-edge technologies and techniques in a rapidly evolving domain.
• Thrive in a progressive environment that relies on communication and initiative, rather than process, to deliver at high velocity.
• Have a "Product Owner" rather than a "Task Implementer" attitude.
• Stay curious and keep improving your skill set.
Desired Qualifications
• Experience building systems involving messaging and/or event-driven architectures.
• Deep technical understanding of at least one of core Java and Golang, and willingness to work with both.
• Strong handle on concurrency challenges and design solutions.
• Strong buy-in to Agile/Lean values.
• Heavy emphasis on code testing and designing for testability.
• Maturity and aptitude to operate in a high-freedom/high-responsibility environment.
• Strong troubleshooting skills.
• Experience with tech ops, supporting and troubleshooting large systems.
• Exposure to DevOps automation such as Chef/Ansible.
• Exposure to IaaS platforms such as AWS EC2, Rackspace, etc.
• Experience with Apache Cassandra, Hadoop, or other NoSQL databases.
• Involvement in an open-source community.
This position is internally titled Senior Software Engineer.
All Dun & Bradstreet job postings can be found at https://www.dnb.com/about-us/careers-and-people/joblistings.html and https://jobs.lever.co/dnb. Official communication from Dun & Bradstreet will come from an email address ending in @dnb.com.
Notice to Applicants: Please be advised that this job posting page is hosted and powered by Lever. Your use of this page is subject to Lever's Privacy Notice and Cookie Policy, which governs the processing of visitor data on this platform.
Posted 1 week ago
2.0 - 6.0 years
3 - 4 Lacs
Gurgaon
On-site
About AutoZone:
AutoZone is the nation's leading retailer and a leading distributor of automotive replacement parts and accessories, with more than 6,000 stores in the US, Puerto Rico, Mexico, and Brazil. Each store carries an extensive product line for cars, sport utility vehicles, vans, and light trucks, including new and remanufactured hard parts, maintenance items, and accessories. We also sell automotive diagnostic and repair software through ALLDATA, diagnostic and repair information through ALLDATAdiy.com, automotive accessories through AutoAnything.com, and auto and light truck parts and accessories through AutoZone.com. Since opening its first store in Forrest City, Ark. on July 4, 1979, the company has joined the New York Stock Exchange (NYSE: AZO) and earned a spot in the Fortune 500. AutoZone has been committed to providing the best parts, prices, and customer service in the automotive aftermarket industry. We have a rich culture and history of going the Extra Mile for our customers and our community. At AutoZone you're not just doing a job; you're playing a crucial role in creating a better experience for our customers while creating opportunities to DRIVE YOUR CAREER almost anywhere! We are looking for talented, customer-focused people who enjoy helping others and have the DRIVE to excel in a fast-paced environment!
Position Summary
We are seeking an experienced SAP CPI (Cloud Platform Integration) Technical Consultant with 2-6 years of hands-on experience in designing, developing, and implementing integration solutions using SAP CPI. The ideal candidate will have a strong technical background in SAP integration technologies, excellent problem-solving skills, and the ability to deliver end-to-end integration solutions in complex enterprise environments. This role involves collaborating with cross-functional teams to ensure seamless integration of SAP and non-SAP systems.
Roles and Responsibilities
Integration Design and Development:
• Design, develop, and implement integration scenarios using SAP CPI to connect SAP and non-SAP systems (e.g., S/4HANA, ECC, third-party applications).
• Create and configure iFlows (integration flows) to meet business requirements.
• Implement integration patterns such as A2A, B2B, and API-based integrations.
Technical Expertise:
• Develop and customize integration artifacts such as mappings (XSLT, Groovy, JavaScript), adapters (SOAP, REST, OData, SFTP, etc.), and security configurations.
• Configure and manage cloud connectors, API management, and event-based integrations.
• Ensure secure data exchange using encryption, certificates, and authentication mechanisms.
Requirement Gathering and Analysis:
• Collaborate with business stakeholders and functional consultants to gather integration requirements.
• Translate business requirements into technical specifications for SAP CPI solutions.
Testing and Deployment:
• Perform unit testing and integration testing, and support user acceptance testing (UAT).
• Troubleshoot and resolve integration issues during development, testing, and post-production phases.
• Deploy integration solutions and monitor performance in production environments.
Performance Optimization:
• Optimize integration flows for performance, scalability, and reliability.
• Monitor and analyze CPI runtime performance using SAP Cloud Platform tools.
Documentation and Training:
• Create and maintain technical documentation, including integration designs, configurations, and operational guides.
• Provide knowledge transfer and training to internal teams or end users as needed.
Collaboration and Support:
• Work closely with SAP functional teams, ABAP developers, and other technical consultants to deliver integrated solutions.
• Provide L2/L3 support for SAP CPI integrations and resolve incidents in a timely manner.
Stay Updated:
• Keep abreast of the latest SAP CPI updates, features, and best practices.
• Recommend innovative solutions to enhance integration capabilities.
Requirements:
1. Experience: 7-9 years of hands-on experience in SAP integration technologies, with at least 3-4 years focused on SAP CPI (Cloud Platform Integration). Proven experience delivering end-to-end integration projects in SAP environments. Experience with SAP PI/PO is a plus.
2. Technical Skills: Strong expertise in developing iFlows using SAP CPI, including adapters (e.g., SOAP, REST, OData, IDoc, SFTP, HTTP). Proficiency in mapping techniques (graphical mapping, XSLT, Groovy, JavaScript). Knowledge of SAP Cloud Connector, API Management, and Open Connectors. Familiarity with security concepts such as OAuth, SSL, PGP encryption, and certificate management. Experience integrating SAP systems (S/4HANA, ECC, SuccessFactors, Ariba, etc.) with non-SAP systems.
3. Soft Skills: Excellent communication and stakeholder management skills. Strong analytical and problem-solving abilities. Ability to work independently and in a team-oriented environment. Proven ability to manage multiple priorities and deliver projects on time.
4. Certifications: SAP Certified Technology Associate – SAP Integration Suite (preferred). Other relevant SAP certifications (e.g., PI/PO, S/4HANA) are a plus.
5. Knowledge of CIG and ISC: An understanding of CIG and ISC mapping is preferable.
6. Professional Qualifications: Experience with SAP BTP (Business Technology Platform) and its services. Knowledge of other integration platforms such as MuleSoft, Dell Boomi, or Apache Camel. Familiarity with hybrid integration scenarios involving on-premise and cloud systems. Experience with event-driven architectures (e.g., SAP Event Mesh).
7. Key Competencies: Strong understanding of integration patterns and best practices. Ability to troubleshoot complex integration issues and provide root cause analysis. Proactive approach to learning and adopting new technologies.
Our Values: An AutoZoner Always...
PUTS CUSTOMERS FIRST | CARES ABOUT PEOPLE | STRIVES FOR EXCEPTIONAL PERFORMANCE | ENERGIZES OTHERS | EMBRACES DIVERSITY | HELPS TEAMS SUCCEED
Posted 1 week ago
9.0 years
5 - 8 Lacs
Gurgaon
Remote
Job description
About this role
Are you interested in building innovative technology that shapes the financial markets? Do you like working at the speed of a startup and solving some of the world's most exciting challenges? Do you want to work with, and learn from, hands-on leaders in technology and finance?
At BlackRock, we are looking for Software Engineers who like to innovate and solve sophisticated problems. We recognize that strength comes from diversity, and we will embrace your outstanding skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual. We invest and protect over $11.6 trillion (USD) of assets and have an extraordinary responsibility to our clients all over the world. Our technology empowers millions of investors to save for retirement, pay for college, buy a home, and improve their financial well-being. Being a technologist at BlackRock means you get the best of both worlds: working for one of the most sophisticated financial companies and being part of a software development team responsible for next-generation technology and solutions.
What are Aladdin and Aladdin Engineering?
You will be working on BlackRock's investment operating system called Aladdin. Aladdin is used both internally within BlackRock and externally by many financial institutions. Aladdin combines sophisticated risk analytics with comprehensive portfolio management, trading, and operations tools on a single platform to power informed decision-making and create a connective tissue for thousands of users investing worldwide. Our development teams reside inside the Aladdin Engineering group. We collaboratively build the next generation of technology that changes the way information, people, and technology intersect for global investment firms. We build and package tools that manage trillions in assets and support millions of financial instruments. We perform risk calculations and process millions of transactions for thousands of users every day worldwide!
Being a member of Aladdin Engineering, you will be:
• Tenacious: Work in a fast-paced and highly complex environment.
• A creative thinker: Analyze multiple solutions and deploy technologies in a flexible way.
• A great teammate: Think and work collaboratively and communicate effectively.
• A fast learner: Pick up new concepts and apply them quickly.
Responsibilities include:
• Collaborate with team members in a multi-office, multi-country environment.
• Deliver high-efficiency, high-availability, concurrent, and fault-tolerant software systems.
• Significantly contribute to the development of Aladdin's global, multi-asset trading platform.
• Work with product management and business users to define the roadmap for the product.
• Design and develop innovative solutions to complex problems, identifying issues and roadblocks.
• Apply validated quality software engineering practices through all phases of development.
• Ensure resilience and stability through quality code reviews; unit, regression, and user acceptance testing; DevOps; and level-two production support.
• Be a leader with vision and a partner in brainstorming solutions for team productivity and efficiency, guiding and motivating others.
• Drive a strong culture by bringing principles of inclusion and diversity to the team and setting the tone through specific recruiting, management actions, and employee engagement.
• Lead individual projects' priorities, deadlines, and deliverables using agile methodologies.
Qualifications: B.E./B.TECH./MCA or any other relevant engineering degree from a reputed university. 9+ years of proven experience. Skills and Experience: A proven foundation in core Java and related technologies, with strong OO skills and command of design patterns. Hands-on experience in designing and writing code with object-oriented programming knowledge in Java, Spring, TypeScript, JavaScript, Microservices, Angular, React. Strong knowledge of the open-source technology stack (Spring, Hibernate, Maven, JUnit, etc.). Exposure to building microservices and APIs, ideally with REST, Kafka or gRPC. Experience with relational databases and/or NoSQL databases (e.g., Apache Cassandra). Exposure to high-scale distributed technologies like Kafka, Mongo, Ignite, Redis. Track record of building high-quality software with design-focused and test-driven approaches. Great analytical, problem-solving and communication skills. Some experience or a real interest in finance, investment processes, and/or an ability to translate business problems into technical solutions. Candidates should have experience leading development teams or projects, or being responsible for the design and technical quality of a significant application, system, or component. Ability to form positive relationships with partnering teams, sponsors, and user groups. Nice to have and opportunities to learn: Experience working in an agile development team or on open-source development projects. Experience with optimization, algorithms or related quantitative processes. Experience with cloud platforms like Microsoft Azure, AWS, Google Cloud. Experience with DevOps and tools like Azure DevOps. Experience with AI-related projects/products or experience working in an AI research environment. A degree, certifications or an open-source track record that shows you have a mastery of software engineering principles. Our benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees.
It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. Job Requisition # R253011
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics, and new product development. PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company. Responsible for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset. Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders. Increase awareness about available data and democratize access to it across the company. As a data engineer, you will be the key technical expert building PepsiCo's data products to drive a strong vision. You'll be empowered to create data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help develop very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities Act as a subject matter expert across different digital projects. Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Responsible for implementing best practices around systems integration, security, performance, and data management. Empower the business by creating value through the increased adoption of data, data science and business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to “productionalize” data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale.
Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries. Qualifications 4+ years of overall technology experience that includes at least 3+ years of hands-on software development, data engineering, and systems architecture. 3+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools. 3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages (Python, PySpark, Scala, etc.). 2+ years of cloud data engineering experience in Azure. Fluent with Azure cloud services; Azure Certification is a plus. Experience in Azure Log Analytics. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake. Experience with Azure Data Factory, Azure Databricks and Azure Machine Learning tools. Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or in the supply chain space is a plus. Experience with version control systems like GitHub and deployment & CI tools. Working knowledge of agile development, including DevOps and DataOps concepts. B.Tech/BA/BS in Computer Science, Math, Physics, or other technical fields. Skills, Abilities, Knowledge: Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior level management. Strong change manager. Comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along. Consistently attain/exceed individual and team goals.
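For a concrete flavor of the pipeline work this posting describes, a minimal PySpark sketch follows; the paths, column names, and quality threshold are hypothetical, and a production pipeline might use a dedicated tool such as Great Expectations or Deequ for the validation step:

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical ingest -> transform -> validate -> publish pipeline.
spark = SparkSession.builder.appName("revenue-pipeline-sketch").getOrCreate()

# Ingest: raw sales extracts landed on the data lake (path is illustrative).
raw = spark.read.option("header", True).csv("/mnt/datalake/raw/sales/")

# Transform: standardize types, derive revenue, and deduplicate on the key.
sales = (
    raw.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("revenue", F.col("units").cast("double") * F.col("unit_price").cast("double"))
       .dropDuplicates(["order_id"])
)

# Validate: a simple data-quality gate on null revenue rows.
null_ratio = sales.filter(F.col("revenue").isNull()).count() / max(sales.count(), 1)
if null_ratio > 0.01:
    raise ValueError(f"Data quality gate failed: {null_ratio:.2%} null revenue rows")

# Publish: write a curated, partitioned table for downstream analytics.
sales.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/datalake/curated/sales/")
```

The same shape (ingest, transform, validate, publish) scales from this toy example to the partitioned, scheduled pipelines the role owns.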
Posted 1 week ago
0 years
0 - 3 Lacs
Mohali
On-site
Job Summary: Location: Mohali (local candidates only). Experience: 1-3 years. We are looking for a skilled Linux System Administrator to manage and support our live server environments, including applications built with React and Node.js. The ideal candidate should also have strong hands-on experience in network administration (LAN/WAN), hardware troubleshooting, and Windows system support. Key Responsibilities: Deploy, monitor, and manage live servers hosting React, Node.js, and other web applications. Configure and maintain Linux-based servers (Ubuntu, CentOS, etc.). Perform regular updates, security patches, and system performance tuning. Manage server deployments, backups, and restore operations. Troubleshoot system and application issues related to live environments. Set up and manage networking systems including LAN/WAN, firewalls, VPNs, and routers. Provide basic to advanced hardware support (servers, storage, workstations). Assist in system upgrades and migration of services. Handle Windows-based systems when required (basic server roles, user management, etc.). Work closely with development and DevOps teams to support continuous deployment pipelines. Ensure system security through proper access control, monitoring, and auditing. Required Skills: Strong hands-on experience with Linux systems administration. Familiarity with hosting and managing live applications using React and Node.js. Networking expertise (LAN/WAN, routing, firewall configuration, etc.). Proficiency in shell scripting and command-line utilities. Basic Windows server administration skills. Experience with web servers (Apache, Nginx) and cloud hosting (optional: AWS, GCP). Strong problem-solving skills and the ability to work independently under pressure. Job Types: Full-time, Permanent Pay: ₹8,321.42 - ₹27,726.27 per month Work Location: In person
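For a flavor of the live-server duties above, a minimal Python sketch of a service health check with automatic restart; the service names are hypothetical, and it assumes a systemd host with sufficient privileges:

```python
import subprocess
import sys

# Hypothetical services backing the React/Node.js stack this posting mentions.
SERVICES = ["nginx", "myapp-node"]

def is_active(service: str) -> bool:
    # `systemctl is-active --quiet` exits 0 only when the unit is active.
    return subprocess.run(
        ["systemctl", "is-active", "--quiet", service]
    ).returncode == 0

def main() -> int:
    failures = 0
    for service in SERVICES:
        if is_active(service):
            print(f"{service}: active")
            continue
        print(f"{service}: down, attempting restart", file=sys.stderr)
        restart = subprocess.run(["systemctl", "restart", service])
        if restart.returncode != 0 or not is_active(service):
            print(f"{service}: restart failed, escalate", file=sys.stderr)
            failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(main())
```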
Posted 1 week ago
2.0 - 5.0 years
10 - 18 Lacs
Mohali
Remote
Job Summary We are looking for an experienced Python Developer to join our dynamic development team. The ideal candidate will have 2 to 5 years of experience in building scalable backend applications and APIs using modern Python frameworks. This role requires a strong foundation in object-oriented programming, web technologies, and collaborative software development. You will work closely with the design, frontend, and DevOps teams to deliver robust and high-performance solutions. Key Responsibilities: Develop, test, and maintain backend applications using Django, Flask, or FastAPI. Build RESTful APIs and integrate third-party services to enhance platform capabilities. Utilize data handling libraries like Pandas and NumPy for efficient data processing. Write clean, maintainable, and well-documented code that adheres to industry best practices. Participate in code reviews and mentor junior developers. Collaborate in Agile teams using Scrum or Kanban workflows. Troubleshoot and debug production issues with a proactive and analytical approach. Required Qualifications: 2 to 5 years of experience in backend development with Python. Proficiency in core and advanced Python concepts, including OOP and asynchronous programming. Strong command over at least one Python framework (Django, Flask, or FastAPI). Experience with data libraries like Pandas and NumPy. Understanding of authentication/authorization mechanisms, middleware, and dependency injection. Familiarity with version control systems like Git. Comfortable working in Linux environments. Must-Have Skills: Expertise in backend Python development and web frameworks. Strong debugging, problem-solving, and optimization skills. Experience with API development and microservices architecture. Deep understanding of software design principles and security best practices. Good-to-Have Skills: Experience with Generative AI frameworks (e.g., LangChain, Transformers, OpenAI APIs). Exposure to Machine Learning libraries (e.g., Scikit-learn, TensorFlow, PyTorch). Knowledge of containerization tools (Docker, Kubernetes). Familiarity with web servers (e.g., Apache, Nginx) and deployment architectures. Understanding of asynchronous programming and task queues (e.g., Celery, AsyncIO). Familiarity with Agile practices and tools like Jira or Trello. Exposure to CI/CD pipelines and cloud platforms (AWS, GCP, Azure). Company Overview We specialize in delivering cutting-edge solutions in custom software, web, and AI development. Our work culture is a unique blend of in-office and remote collaboration, prioritizing our employees above everything else. At our company, you’ll find an environment where continuous learning, leadership opportunities, and mutual respect thrive. We are proud to foster a culture where individuals are valued, encouraged to evolve, and supported in achieving their fullest potential. Benefits and Perks: Competitive Salary: Earn up to ₹10-18 LPA based on skills and experience. Generous Time Off: Benefit from 18 annual holidays to maintain a healthy work-life balance. Continuous Learning: Access extensive learning opportunities while working on cutting-edge projects. Client Exposure: Gain valuable experience in client-facing roles to enhance your professional growth. Job Type: Full-time Pay: ₹1,000,000.00 - ₹1,800,000.00 per year Benefits: Leave encashment Paid sick time Paid time off Application Question(s): How many years of experience do you have working with Python in backend development?
How comfortable are you with creating and integrating RESTful APIs? Rate your skill from 1 to 10. Do you have experience working with Pandas, NumPy, or any data processing libraries in Python? Your expected salary? Your current salary? Experience: Python Developer: 3 years (Preferred) Work Location: In person
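For illustration, a minimal FastAPI sketch in the spirit of the API and data-handling skills this posting lists; the dataset and endpoint are hypothetical:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import pandas as pd

app = FastAPI(title="orders-api-sketch")

# Hypothetical in-memory dataset; a real service would query a database.
orders = pd.DataFrame(
    [
        {"order_id": 1, "customer": "acme", "amount": 120.5},
        {"order_id": 2, "customer": "acme", "amount": 99.0},
        {"order_id": 3, "customer": "globex", "amount": 42.0},
    ]
)

class CustomerSummary(BaseModel):
    customer: str
    order_count: int
    total_amount: float

@app.get("/customers/{name}/summary", response_model=CustomerSummary)
async def customer_summary(name: str) -> CustomerSummary:
    # Filter the frame with Pandas and aggregate for the response.
    subset = orders[orders["customer"] == name]
    if subset.empty:
        raise HTTPException(status_code=404, detail="unknown customer")
    return CustomerSummary(
        customer=name,
        order_count=int(len(subset)),
        total_amount=float(subset["amount"].sum()),
    )
```

Saved as main.py, this runs locally with `uvicorn main:app --reload`.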
Posted 1 week ago
2.0 years
1 - 5 Lacs
Ahmedabad
On-site
Experience: 2+ years in AI/ML, with hands-on development & leadership Key Responsibilities: ● Architect, develop, and deploy AI/ML solutions across various business domains. ● Research and implement cutting-edge deep learning, NLP, and computer vision models. ● Optimize AI models for performance, scalability, and real-time inference. ● Develop and manage data pipelines, model training, and inference workflows. ● Integrate AI solutions into microservices and APIs using scalable architectures. ● Lead AI-driven automation and decision-making systems. ● Ensure model monitoring, explainability, and continuous improvement in production. ● Collaborate with data engineering, software development, and DevOps teams. ● Stay updated with LLMs, transformers, federated learning, and AI ethics. ● Mentor AI engineers and drive AI research & development initiatives. Technical Requirements: ● Programming: Python (NumPy, Pandas, Scikit-learn). ● Deep Learning Frameworks: TensorFlow, PyTorch, JAX. ● NLP & LLMs: Hugging Face Transformers, BERT, GPT models, RAG, fine-tuning LLMs. ● Computer Vision: OpenCV, YOLO, Faster R-CNN, Vision Transformers (ViTs). ● Data Engineering: Spark, Dask, Apache Kafka, SQL/NoSQL databases. ● Cloud & MLOps: AWS/GCP/Azure, Kubernetes, Docker, CI/CD for ML pipelines. ● Optimization & Scaling: Model quantization, pruning, knowledge distillation. ● Big Data & Distributed Computing: Ray, Dask, TensorRT, ONNX. ● Security & Ethics: Responsible AI, Bias detection, Model explainability (SHAP, LIME). Preferred Qualifications: ● Experience with real-time AI applications, reinforcement learning, or edge AI. ● Contributions to AI research (publications, open-source contributions). ● Experience integrating AI with ERP, CRM, or enterprise solutions. Job Types: Full-time, Permanent Pay: ₹100,000.00 - ₹500,000.00 per year Schedule: Day shift Application Question(s): What is your current CTC? Experience: AI: 2 years (Required) Machine learning: 2 years (Required) Work Location: In person
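As a hedged illustration of the NLP side of this role, a minimal Hugging Face Transformers inference sketch; the checkpoint is a public off-the-shelf model chosen only for the example:

```python
from transformers import pipeline

# Sentiment inference with an off-the-shelf checkpoint (downloads on first run).
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

reviews = [
    "The deployment pipeline is fast and reliable.",
    "Inference latency regressed badly after the last release.",
]

for review, result in zip(reviews, classifier(reviews)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8} ({result['score']:.2f}) :: {review}")
```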
Posted 1 week ago
8.0 - 12.0 years
1 - 6 Lacs
Noida
On-site
Key Responsibilities: Work with development teams and product managers to ideate software solutions. Design client-side and server-side architecture. Build the front-end of applications through appealing visual design. Develop and manage well-functioning databases and applications. Write effective APIs. Test software to ensure responsiveness and efficiency. Troubleshoot, debug and upgrade software. Create security and data protection settings. Build features and applications with a mobile-responsive design. Write technical documentation. Work with AI/ML engineers, data scientists and analysts to improve software. Collaborate with cross-functional teams, including the SAP machine learning team and customers, to define requirements and implement AI solutions. Establish and enforce data governance standards and implement best practices for data privacy and application protection. Qualifications and Education Requirements: Bachelor’s/Master’s degree in computer science, data science, mathematics or a related field. At least 8-12 years’ experience in building AI/ML applications. Preferred Skills: Very good understanding of Agile project methodologies. Experience working with multiple front-end languages and libraries (e.g. HTML/CSS, JavaScript, XML, JSON, jQuery, Bootstrap). Experience working with multiple back-end languages (e.g. Python, J2EE) and JavaScript frameworks (e.g. Angular, React, Node.js). Worked on databases (e.g. MySQL, SQL Server, MongoDB), application servers (e.g. Django, JBoss, Apache) and UI/UX design. Has led a team for at least 4 years. Has worked on providing end-to-end solutions. Great communication and collaboration skills. Self-starter with an entrepreneurial, results-oriented mindset. Excellent communication, negotiation, and interpersonal skills. Qualifications Bachelor’s/Master’s degree in computer science, data science, mathematics or a related field
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Noida
On-site
5 - 7 Years 2 Openings Noida Role description Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions, including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions. Outcomes: Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools. Influence and improve customer satisfaction through effective data solutions. Measures of Outcomes: Adherence to engineering processes and standards. Adherence to schedule/timelines. Adherence to SLAs where applicable. # of defects post delivery. # of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times). Average time to detect, respond to, and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches. Outputs Expected: Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers. Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results. Configuration: Define and govern the configuration management plan. Ensure compliance within the team. Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed. Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise. Project Management: Manage the delivery of modules effectively. Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects. Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team. Release Management: Execute and monitor the release process to ensure smooth transitions. Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models. Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations. Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives. Certifications: Obtain relevant domain and technology certifications to stay competitive and informed. Skill Examples: Proficiency in SQL, Python, or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components. Knowledge Examples: Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF, and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering. Additional Comments: Skills: Cloud Platforms (AWS, MS Azure, GCP, etc.). Containerization and Orchestration (Docker, Kubernetes, etc.). API development. Data Pipeline construction using languages like Python, PySpark, and SQL. Data Streaming (Kafka, Azure Event Hub, etc.). Data Parsing (Akka, MinIO, etc.). Database Management (SQL and NoSQL, including ClickHouse, PostgreSQL, etc.). Agile Methodology (Git, Jenkins, or Azure DevOps, etc.). JS connectors/frameworks for frontend/backend. Collaboration and Communication Skills. AWS Cloud, Azure Cloud, Docker, Kubernetes. About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations.
With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
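As an illustration of the "SQL for analytics including windowing functions" knowledge item above, a minimal PySpark sketch; the table and columns are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("windowing-sketch").getOrCreate()

# Hypothetical daily revenue per region.
rows = [
    ("north", "2024-01-01", 100.0),
    ("north", "2024-01-02", 140.0),
    ("south", "2024-01-01", 80.0),
    ("south", "2024-01-02", 60.0),
]
spark.createDataFrame(rows, ["region", "day", "revenue"]).createOrReplaceTempView("daily_revenue")

# Windowing functions: running total and day-over-day delta per region.
result = spark.sql("""
    SELECT region,
           day,
           revenue,
           SUM(revenue) OVER (PARTITION BY region ORDER BY day) AS running_total,
           revenue - LAG(revenue) OVER (PARTITION BY region ORDER BY day) AS day_over_day
    FROM daily_revenue
    ORDER BY region, day
""")
result.show()
```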
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh
On-site
We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
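As a small, hedged illustration of Hadoop-style processing on Spark, the classic word count expressed on the RDD API (the input lines are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
sc = spark.sparkContext

# The classic MapReduce-style word count, expressed on Spark's RDD API.
lines = sc.parallelize([
    "big data engineering on the hadoop ecosystem",
    "spark jobs optimize big data pipelines",
])

counts = (
    lines.flatMap(lambda line: line.split())   # map: split lines into words
         .map(lambda word: (word, 1))          # map: emit (word, 1) pairs
         .reduceByKey(lambda a, b: a + b)      # reduce: sum counts per word
)

for word, n in counts.collect():
    print(word, n)
```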
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Organizations everywhere struggle under the crushing costs and complexities of “solutions” that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There’s another option. Freshworks. With a fresh vision for how the world works. At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks’ customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And, over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us. Job Description As a Senior NOC Engineer, you will play a vital role in ensuring the health, stability, and uptime of our production systems. This is a hands-on, operational role requiring a deep understanding of system administration, networking, and incident response. You’ll act as the first line of defense during outages and performance issues, with responsibility for real-time monitoring, troubleshooting, and driving incident resolution in a 24/7 environment. If you enjoy working with infrastructure at scale and thrive in fast-paced environments, this is the role for you. Roles & Responsibilities Monitor production systems and applications to ensure consistent uptime, performance, and availability Respond to and manage incidents, alerts, and outages in real time, coordinating appropriate responses Conduct root cause analysis (RCA) and implement corrective and preventive actions Troubleshoot system, application, and network issues escalated by monitoring systems or support teams Participate in 24/7 shift rotations, including weekends and holidays, to ensure continuous support Collaborate with engineering and product teams to improve observability and monitoring frameworks Develop and update SOPs, runbooks, and internal knowledge bases to ensure process consistency Maintain compliance with internal security, audit, and operational standards Recommend and implement automation and monitoring improvements to increase efficiency and reduce incident frequency Engage in post-incident reviews and help drive blameless postmortems and process improvement initiatives Qualifications 3+ years of hands-on experience in Linux/Unix systems administration and network troubleshooting Solid grasp of internet and network protocols: DNS, DHCP, TCP/IP, NTP, SMTP, VPNs, HTTPS, TLS, IPSec Experience monitoring and managing applications like Apache, Tomcat, MySQL Proficient in scripting using Shell, Python, or Ruby for automation Experience with monitoring/logging tools such as Nagios, Datadog, New Relic, ELK, Splunk, or Sumo Logic Familiarity with incident management platforms like PagerDuty, JIRA, or ServiceNow Basic knowledge of web technologies including HTML, CSS, JavaScript, and backend fundamentals Experience with public cloud platforms (preferably AWS) Hands-on experience with Docker and Kubernetes Working knowledge of CI/CD pipelines and tools like Jenkins
Familiarity with Infrastructure-as-Code using Terraform. Excellent communication skills and ability to work with cross-functional teams including DevOps, SRE, and Security. Skills Inventory: Production Monitoring: Real-time infrastructure and application monitoring for uptime and performance. Incident Response: Timely identification, escalation, and resolution of production issues. Root Cause Analysis: Investigation and documentation of service-impacting events. Linux/Unix Administration: Deep expertise in managing server environments. Networking Fundamentals: Strong understanding of protocols like DNS, DHCP, TCP/IP, VPN. Scripting & Automation: Writing scripts in Shell/Python/Ruby to automate tasks. Monitoring & Logging Tools: Hands-on use of tools like Datadog, ELK, Nagios, Splunk. Cloud Infrastructure: Working with AWS or equivalent public cloud platforms. Containers & Orchestration: Knowledge of Docker and Kubernetes. CI/CD & DevOps: Familiarity with Jenkins and deployment pipelines. Infrastructure as Code: Basic experience using Terraform. Collaboration: Strong coordination with SRE, Security, and Engineering teams. Compliance & Documentation: Creating SOPs, playbooks, and ensuring adherence to policies. Additional Information At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
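For a flavor of the "scripting using Shell, Python, or Ruby for automation" requirement, a minimal Python health-check sketch with retries and a Nagios-style exit code; the URL and thresholds are hypothetical:

```python
import sys
import time
import urllib.error
import urllib.request

# Hypothetical endpoint a NOC monitor might poll.
HEALTH_URL = "https://example.internal/healthz"
RETRIES = 3
TIMEOUT_SECONDS = 5

def check_once(url: str) -> bool:
    """Return True if the endpoint answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def main() -> int:
    for attempt in range(1, RETRIES + 1):
        if check_once(HEALTH_URL):
            print(f"OK on attempt {attempt}")
            return 0
        time.sleep(2 ** attempt)  # simple exponential backoff between retries
    print("CRITICAL: health check failed; page the on-call", file=sys.stderr)
    return 2  # Nagios-style critical exit code

if __name__ == "__main__":
    sys.exit(main())
```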
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Key Responsibilities: · Azure Cloud & Databricks: o Design and build efficient data pipelines using Azure Databricks (PySpark). o Implement business logic for data transformation and enrichment at scale. o Manage and optimize Delta Lake storage solutions. · API Development: o Develop REST APIs using FastAPI to expose processed data. o Deploy APIs on Azure Functions for scalable and serverless data access. · Data Orchestration & ETL: o Develop and manage Airflow DAGs to orchestrate ETL processes. o Ingest and process data from various internal and external sources on a scheduled basis. · Database Management: o Handle data storage and access using PostgreSQL and MongoDB. o Write optimized SQL queries to support downstream applications and analytics. · Collaboration: o Work cross-functionally with teams to deliver reliable, high-performance data solutions. o Follow best practices in code quality, version control, and documentation. Required Skills & Experience: · 5+ years of hands-on experience as a Data Engineer. · Strong experience with Azure Cloud services. · Proficient in Azure Databricks, PySpark, and Delta Lake. · Solid experience with Python and FastAPI for API development. · Experience with Azure Functions for serverless API deployments. · Skilled in managing ETL pipelines using Apache Airflow. · Hands-on experience with PostgreSQL and MongoDB. · Strong SQL skills and experience handling large datasets.
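For illustration, a minimal Airflow DAG sketch of the orchestration responsibility described above; the task bodies are placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull source extracts")  # placeholder for the real ingest logic

def transform():
    print("run Databricks / PySpark transformations")  # placeholder

def publish():
    print("load curated tables into PostgreSQL / MongoDB")  # placeholder

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # scheduled ingestion, as the posting describes
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_publish = PythonOperator(task_id="publish", python_callable=publish)

    t_ingest >> t_transform >> t_publish  # linear dependency chain
```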
Posted 1 week ago
1.0 - 3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About The Company Tata Communications redefines connectivity with innovation and intelligence. Driving the next level of intelligence powered by Cloud, Mobility, Internet of Things, Collaboration, Security, Media services and Network services, we at Tata Communications are envisaging a New World of Communications. Job Title: Linux Support Engineer – Level 1 (with L2 Task Awareness) Location: Pune Experience: 1 to 3 years Shift: Rotational (24x7 support) Job Type: Full-time Job Summary We are seeking a dedicated L1 Linux Support Engineer to provide frontline operational support for enterprise Linux servers. The engineer will focus primarily on L1 responsibilities, but must also have a basic to intermediate understanding of L2 tasks for occasional handling of escalated activities and team backup. Key Responsibilities L1 Responsibilities (Primary): Monitor system performance, server health, and basic services using tools like Nagios, Zabbix, or similar. Handle tickets for standard issues like disk space, service restarts, log checks, user creation, and permission troubleshooting. Basic troubleshooting of server access issues (SSH, sudo access, etc.). Perform routine activities such as patching coordination, backup monitoring, antivirus checks, and compliance tasks. Execute pre-defined SOPs and escalation procedures in case of critical alerts or failures. Regularly update incident/ticket tracking systems (e.g., ServiceNow, Remedy). Provide hands-and-feet support at the data center if required. L2 Awareness (Secondary / Occasional Tasks): Understand LVM management, disk extension, and logical volume creation. Awareness of service and daemon-level troubleshooting (Apache, NGINX, SSH, Cron, etc.). Ability to assist in OS patching, kernel updates, and troubleshooting post-patch issues. Exposure to basic scripting (Bash, Shell) to automate repetitive tasks. Familiarity with tools like Red Hat Satellite, Ansible, and centralized logging (e.g., syslog, journalctl). Understand basic clustering, HA concepts, and DR readiness tasks. Assist the L2 team during major incidents or planned changes. Required Skills Hands-on with RHEL, CentOS, Ubuntu or other Enterprise Linux distributions. Basic knowledge of Linux command-line tools, file systems, and system logs. Good understanding of the Linux boot process, run levels, and systemd services. Basic networking knowledge (ping, traceroute, netstat, etc.). Familiar with ITSM tools and ticketing processes. Nice to Have RHCSA certification (preferred). Exposure to virtualization (VMware, KVM) and cloud environments (AWS, Azure). Experience with shell scripting or Python for automation. Understanding of the ITIL framework. Soft Skills Strong communication and coordination skills. Ability to follow instructions and SOPs. Willingness to learn and take ownership of tasks. Team player with a proactive mindset.
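As a hedged example of the "scripting to automate repetitive tasks" item above, a minimal disk-usage check in Python; the mount points and alert threshold are illustrative:

```python
import shutil
import sys

# Hypothetical mount points an L1 engineer might watch; threshold is illustrative.
MOUNTS = ["/", "/var", "/home"]
ALERT_PERCENT = 85

def usage_percent(path: str) -> float:
    # shutil.disk_usage returns (total, used, free) in bytes.
    total, used, _free = shutil.disk_usage(path)
    return used / total * 100

def main() -> int:
    exit_code = 0
    for mount in MOUNTS:
        pct = usage_percent(mount)
        print(f"{mount}: {pct:.1f}% used")
        if pct >= ALERT_PERCENT:
            print(f"WARNING: {mount} above {ALERT_PERCENT}% used; raise a ticket per SOP", file=sys.stderr)
            exit_code = 1
    return exit_code

if __name__ == "__main__":
    sys.exit(main())
```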
Posted 1 week ago