3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Oracle’s Cloud Infrastructure team is building services that operate at high scale in a broadly distributed multi-tenant cloud environment. Our customers run their businesses on our cloud, and our mission is to provide them with best-in-class compute, storage, networking, database, security, and an ever-expanding set of foundational cloud-based services. We’re looking for hands-on engineers with expertise in, and a passion for, solving difficult problems in distributed systems and highly available services. If this is you, at Oracle you can design and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. An engineer at any level can have significant technical and business impact.

Responsibilities
As a Senior Member of Technical Staff, you will own the software design and development for major components of Oracle’s Cloud Infrastructure. You should be both a rock-solid coder and a distributed systems generalist, able to dive deep into any part of the stack and low-level systems, as well as design broad distributed system interactions. You should value simplicity and scale, work comfortably in a collaborative, agile environment, and be excited to learn.

About you:
You work backward, starting from the user. You care about creating usable, useful software that solves real problems and brings delight to users.
You have solid communication skills. You can clearly explain complex technical concepts.
You work well with non-engineers. You can lead a conversation in a room with designers, engineers, and product managers.
You are comfortable with ambiguity. You have a strong sense of ownership and are able to drive development of new projects and features to completion.
You are comfortable working at all levels of the stack.

Minimum Qualifications:
Bachelor’s degree in Computer Science, or equivalent experience.
3+ years of experience shipping services software.
Strong knowledge of data structures, algorithms, operating systems, and distributed systems fundamentals.
Working familiarity with networking protocols (TCP/IP, HTTP) and standard network architectures.
Knowledge of Internet protocols and standards, including SMTP, REST, SSL, and HTTP.
Strong understanding of databases, NoSQL systems, storage, and distributed persistence technologies.
Strong troubleshooting and performance tuning skills.

Preferred Qualifications:
Master’s degree in Computer Science.
Strong understanding of event streaming platforms such as Apache Kafka.
Strong grasp of Kubernetes.
Experience in a start-up environment.
Experience delivering and operating large-scale, highly available distributed systems.
Strong grasp of Unix-like operating systems.
Experience building multi-tenant, virtualized infrastructure is a strong plus.
Design, develop, troubleshoot, and debug software programs for databases, applications, tools, networks, etc.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes.
We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-Have Skills: PySpark
Good-to-Have Skills: NA
Minimum Experience Required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the business environment. You will also engage in problem-solving discussions and contribute innovative ideas to enhance application performance and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Good-to-Have Skills: Experience with Apache Spark and data processing frameworks.
- Strong understanding of data manipulation and transformation techniques (see the sketch below).
- Familiarity with cloud platforms such as AWS or Azure.
- Experience in developing and deploying applications in a distributed environment.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Pune office.
- 15 years of full-time education is required.
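For illustration, a minimal sketch of the kind of PySpark data transformation this role involves; the input path, schema, and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Hypothetical input: a CSV of orders with order_date and amount columns
orders = spark.read.option("header", True).csv("/data/orders.csv")

daily = (
    orders.withColumn("amount", F.col("amount").cast("double"))
          .filter(F.col("amount").isNotNull())
          .groupBy("order_date")
          .agg(
              F.sum("amount").alias("total_amount"),
              F.count("*").alias("order_count"),
          )
)

# Write the aggregated result as Parquet for downstream consumers
daily.write.mode("overwrite").parquet("/data/daily_orders")
```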
Posted 1 week ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
Oracle’s Cloud Infrastructure team is building services that operate at high scale in a broadly distributed multi-tenant cloud environment. Our customers run their businesses on our cloud, and our mission is to provide them with best-in-class compute, storage, networking, database, security, and an ever-expanding set of foundational cloud-based services. We’re looking for hands-on engineers with expertise in, and a passion for, solving difficult problems in distributed systems and highly available services. If this is you, at Oracle you can design and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. An engineer at any level can have significant technical and business impact.

Responsibilities
As a Senior Member of Technical Staff, you will own the software design and development for major components of Oracle’s Cloud Infrastructure. You should be both a rock-solid coder and a distributed systems generalist, able to dive deep into any part of the stack and low-level systems, as well as design broad distributed system interactions. You should value simplicity and scale, work comfortably in a collaborative, agile environment, and be excited to learn.

About you:
You work backward, starting from the user. You care about creating usable, useful software that solves real problems and brings delight to users.
You have solid communication skills. You can clearly explain complex technical concepts.
You work well with non-engineers. You can lead a conversation in a room with designers, engineers, and product managers.
You are comfortable with ambiguity. You have a strong sense of ownership and are able to drive development of new projects and features to completion.
You are comfortable working at all levels of the stack.

Minimum Qualifications:
Bachelor’s degree in Computer Science, or equivalent experience.
3+ years of experience shipping services software.
Strong knowledge of data structures, algorithms, operating systems, and distributed systems fundamentals.
Working familiarity with networking protocols (TCP/IP, HTTP) and standard network architectures.
Knowledge of Internet protocols and standards, including SMTP, REST, SSL, and HTTP.
Strong understanding of databases, NoSQL systems, storage, and distributed persistence technologies.
Strong troubleshooting and performance tuning skills.

Preferred Qualifications:
Master’s degree in Computer Science.
Strong understanding of event streaming platforms such as Apache Kafka.
Strong grasp of Kubernetes.
Experience in a start-up environment.
Experience delivering and operating large-scale, highly available distributed systems.
Strong grasp of Unix-like operating systems.
Experience building multi-tenant, virtualized infrastructure is a strong plus.
Design, develop, troubleshoot, and debug software programs for databases, applications, tools, networks, etc.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes.
We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description: Senior Full Stack Developer
Position: Senior Full Stack Developer
Location: Gurugram
Relevant Experience Required: 8+ years
Employment Type: Full-time

About The Role
We are looking for a Senior Full Stack Developer who can build end-to-end web applications, with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and vector databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization.

Key Responsibilities

Front-End Development
Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React.
Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI.
Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly.
Ensure cross-browser compatibility and optimize for performance and accessibility.
Collaborate with designers to translate wireframes and prototypes into functional components.

Back-End Development
Develop RESTful and GraphQL APIs with Django/DRF and Node.js/Express.
Design and implement microservices and event-driven architectures.
Optimize server performance and ensure secure API integrations.

Database & Data Management
Work with structured (PostgreSQL, MySQL) and unstructured databases (MongoDB, Cassandra, DynamoDB).
Integrate and manage vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations.
Implement sharding, clustering, caching, and replication strategies for scalability.
Manage both transactional and analytical workloads efficiently.

Real-Time Processing & Visualization
Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams.
Build live features (e.g., notifications, chat, analytics) using WebSockets and Server-Sent Events (SSE); a minimal SSE sketch follows below.
Visualize large-scale data in real time for dashboards and BI applications.

DevOps & Deployment
Deploy applications on cloud platforms (AWS, Azure, GCP).
Use Docker, Kubernetes, Helm, and Terraform for scalable deployments.
Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI.
Monitor, log, and ensure high availability with Prometheus, Grafana, and the ELK/EFK stack.

Good To Have: AI & Advanced Capabilities
Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search.
Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings.
Work on multimodal data processing (text, image, and video).

Preferred Skills & Qualifications

Core Stack
Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI
Back-End: Python (Django/DRF), Node.js/Express
Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, vector databases (Pinecone, Milvus, Weaviate, Chroma)
APIs: REST, GraphQL, gRPC

State-of-the-Art & Advanced Tools
Streaming: Apache Kafka, Apache Pulsar, Redis Streams
Visualization: D3.js, Highcharts, Plotly, Deck.gl
Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD
Cloud: AWS Lambda, Azure Functions, Google Cloud Run
Monitoring: Prometheus, Grafana, OpenTelemetry
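As a rough sketch of the Server-Sent Events pattern mentioned above, here is a minimal Python endpoint. FastAPI is used only for brevity (the posting's stack is Django/Node.js), and the event payload is a placeholder:

```python
import asyncio
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def event_stream():
    """Yield one SSE-framed JSON event per second (placeholder data)."""
    counter = 0
    while True:
        payload = json.dumps({"type": "tick", "value": counter})
        yield f"data: {payload}\n\n"  # SSE frames end with a blank line
        counter += 1
        await asyncio.sleep(1.0)

@app.get("/events")
async def events():
    return StreamingResponse(event_stream(), media_type="text/event-stream")
```

A browser client can consume this with `new EventSource("/events")`; the same framing applies whichever back-end framework serves it.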
Posted 1 week ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description: Senior MLOps Engineer
Position: Senior MLOps Engineer
Location: Gurugram
Relevant Experience Required: 6+ years
Employment Type: Full-time

About The Role
We are seeking a Senior MLOps Engineer with deep expertise in machine learning operations, data engineering, and cloud-native deployments. This role requires building and maintaining scalable ML pipelines, ensuring robust data integration and orchestration, and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools, data clustering, big data frameworks, and DevOps best practices, ensuring high reliability, performance, and security for enterprise AI workloads.

Key Responsibilities

MLOps & Machine Learning Deployment
Design, implement, and maintain end-to-end ML pipelines from experimentation to production.
Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks (a minimal experiment-tracking sketch follows below).
Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD).
Monitor ML systems in production for drift, bias, performance degradation, and anomalies.
Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs.

Data Engineering & Integration
Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data.
Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster.
Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation.
Implement data clustering, partitioning, and sharding strategies for high availability and scalability.
Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures).
Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations).

Cloud & Infrastructure
Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run).
Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, and Metaflow.
Optimize for cost, latency, and scalability across distributed environments.
Implement infrastructure as code (IaC) with Terraform or Pulumi.

Real-Time ML & Advanced Capabilities
Build low-latency, real-time inference pipelines using gRPC, Triton Inference Server, or Ray Serve.
Work on vector database integrations (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search.
Enable retrieval-augmented generation (RAG) pipelines for LLMs.
Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization.

Security, Monitoring & Observability
Implement robust access control, encryption, and compliance with SOC 2, GDPR, and ISO 27001.
Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry.
Ensure zero-downtime deployments with blue-green/canary release strategies.
Manage audit trails and explainability for ML models.

Preferred Skills & Qualifications

Core Technical Skills
Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala is a plus.
MLOps Frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC.
Data Engineering Tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt.
Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB.
Vector Databases: Pinecone, Weaviate, Milvus, Chroma.
Visualization: Plotly Dash, Superset, Grafana.

Tech Stack
Orchestration: Kubernetes, Helm, Argo Workflows, Prefect.
Infrastructure as Code: Terraform, Pulumi, Ansible.
Cloud Platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS).
Model Optimization: ONNX, TensorRT, Hugging Face Optimum.
Streaming & Real-Time ML: Kafka, Flink, Ray, Redis Streams.
Monitoring & Logging: Prometheus, Grafana, ELK, OpenTelemetry.
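A minimal sketch of the experiment-tracking side of such a pipeline, using MLflow; the experiment name, parameters, and metric value are placeholders, and the training step is elided:

```python
import mlflow

mlflow.set_experiment("churn-model")  # hypothetical experiment name

with mlflow.start_run():
    # Record the hyperparameters used for this run
    params = {"n_estimators": 200, "max_depth": 8}
    mlflow.log_params(params)

    # ... train and evaluate a model here ...

    mlflow.log_metric("rmse", 0.42)           # placeholder metric value
    mlflow.set_tag("stage", "experimentation")
```

Runs logged this way are browsable in the MLflow UI, which supports the versioning and monitoring workflow described above.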
Posted 1 week ago
2.0 - 5.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Job Description Summary

Excited to grow your career? We value our talented employees, and whenever possible we strive to help one of our associates grow professionally before recruiting new talent to our open positions. If you think the open position you see is right for you, we encourage you to apply! Our people make all the difference in our success.

We are the makers of possible. BD is one of the largest global medical technology companies in the world. Advancing the world of health is our Purpose, and it's no small feat. It takes the imagination and passion of all of us, from design and engineering to the manufacturing and marketing of our billions of MedTech products per year, to look at the impossible and find transformative solutions that turn dreams into possibilities.

Position Summary
For our Pharmacy Automation portfolio, we are looking for an experienced Full Stack Developer to contribute to a project that enhances pharmacy performance and device uptime through real-time analytics and detailed reporting. Pharmacy automation, involving robotics and automated systems, plays a vital role in dispensing, packaging, labeling, and organizing medications with precision and efficiency. Our solution leverages Apache Kafka for streaming real-time events, applications hosted on Azure IoT-based edge appliances, and Kubernetes to manage our cloud-based applications (a minimal event-consumer sketch follows below). In this role, you'll work with these cutting-edge technologies to optimize pharmacy operations, managing thousands of pharmacy automation robots to ensure quicker and more accurate service. If you're intrigued by the intersection of healthcare and technology and have a background in any of these areas, we invite you to explore this unique opportunity with us.

Responsibilities
Work closely with cross-functional teams, including product management, design, and other engineering teams, to deliver robust and scalable solutions.
Design, develop, and evolve highly scalable applications, ensuring high performance and responsiveness.
Build scalable cloud solutions (preferably Azure) and deploy applications with high availability, low latency, and scalability.
Develop and maintain microservices for SaaS applications, ensuring modularity, scalability, and reusability.
Provide expertise and establish best practices on the development, monitoring, and maintenance of event-based technologies.
Collaborate with other development team members to design and create interconnected systems.
Encourage innovation and continuously improve our ability to deliver quality solutions by evolving our engineering process and technical capabilities.
Debug and analyze performance of deployed production systems.
Keep up to date on the latest software development methods, language features, and design philosophies to contribute to the technology roadmap and manage tech-debt work.
Master our development process, culture, and code base, then think of ways to improve it and implement them within the team.

Requirements
Bachelor's degree (or above) in computer science or a related field.
Minimum of 4-9 years of professional experience.
Experience in system design, development, and delivery of highly scalable, distributed, multi-tenant SaaS products using Azure (preferred), Google Cloud, or AWS.
Experience working in an Agile/Scrum environment with standard methodologies such as TDD, code reviews, and ownership of unit/integration tests to ensure coverage and quality.
Experience in SaaS development.
Experience with coding in C#, .NET, and SQL (MSSQL or Postgres).
Experience with Azure DevOps/Git, including familiarity with CI/CD build and release pipelines.
Knowledge of monitoring tools within Azure or AWS.
Experience with Docker, Kubernetes, and cloud deployment technologies.
Experience with Single Page Applications: Angular, TypeScript, HTML, CSS, NodeJS.
Strong fundamentals of data structures, algorithms, design patterns, and programming.
Excellent problem-solving and troubleshooting skills.
Experience with Agile development processes and tools.

Good to Have
Experience in automated testing is preferred.
Experience with the Kafka Connect framework.
Experience with IoT Edge.
Experience in the healthcare or pharmacy automation industry is a plus.

Why Join Us?
A career at BD means learning and working alongside inspirational leaders and colleagues who are equally passionate and committed to fostering an inclusive, growth-centered, and rewarding culture. You will have the opportunity to help shape the trajectory of BD while leaving a legacy at the same time. To find purpose in the possibilities, we need people who can see the bigger picture, who understand the human story that underpins everything we do. We welcome people with the imagination and drive to help us reinvent the future of health. At BD, you'll discover a culture in which you can learn, grow, and thrive, and find satisfaction in doing your part to make the world a better place. Become a maker of possible with us!

Primary Work Location: IND Bengaluru Technology Campus
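An illustrative sketch of consuming device events from Kafka, as the solution described above does. It is shown in Python with the kafka-python client for brevity (the posting's stack is C#/.NET), and the topic and field names are hypothetical:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "device-telemetry",                       # hypothetical topic name
    bootstrap_servers=["localhost:9092"],
    group_id="uptime-monitor",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Flag fault events for the uptime dashboard (hypothetical schema)
    if event.get("status") == "fault":
        print(f"device {event.get('device_id')}: fault at {event.get('ts')}")
```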
Posted 1 week ago
30.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Temenos
Temenos powers a world of banking that creates opportunities for billions of people and businesses everywhere. We have been doing this for over 30 years through the pioneering spirit of our Temenosians, who are passionate about making banking better, together. We serve over 3000 clients, from the largest banks to challengers and community banks, in 150+ countries. We collaborate with clients to build new banking services and state-of-the-art customer experiences on our open banking platform, helping them operate more sustainably. At Temenos, we have an open-minded and inclusive culture, where everyone has the power to create their own destiny and make a positive contribution to the world of banking and society.

THE ROLE
We are seeking an experienced Technology Expert / Senior Technical Specialist to join our GDC team. This role is ideal for a seasoned professional with strong technical expertise in Temenos Transact (T24), JBC, Java, and Python.

OPPORTUNITIES
You will play a key role in translating client requirements into robust technical designs.
You will develop and deliver high-quality custom code and configurations for Temenos Transact implementations.
You will collaborate closely with functional consultants, business analysts, and client teams to deliver high-quality solutions.
You will design, code, and perform unit testing of new functionality as well as support activities.
You will adhere to standard development practices and procedures (Agile), following defined coding standards throughout the development phase.

Skills
You should have strong expertise in Java and Python programming; knowledge of JBC is an added advantage.
You should have proven experience with REST APIs and microservices.
Familiarity with Apache Camel and data streaming platforms is a strong advantage.
You should have deep functional and technical knowledge of the AA (Arrangement Architecture) and Payments modules in Temenos Transact.

VALUES
Care about transforming the banking landscape.
Commit to being part of an exciting culture and product evolving within the financial industry.
Collaborate effectively and proactively with teams within and outside Temenos.
Challenge yourself to be ambitious and achieve your individual targets as well as the company's.

SOME OF OUR BENEFITS include:
Maternity leave: transition back with 3 days per week in the first month and 4 days per week in the second month.
Civil partnership: 1 week of paid leave if you're getting married. This covers marriages and civil partnerships, including same-sex partnerships.
Family care: 4 weeks of paid family care leave.
Recharge days: 4 days per year to use when you need to physically or mentally recharge.
Study leave: 2 weeks of paid leave each year for study or personal development.

Please make sure to read our Recruitment Privacy Policy.
Posted 1 week ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Role Title: Analyst, Data Sourcing - Metadata (L08)

Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise, and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet, and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer flexibility and choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on advancing diverse talent into leadership roles.

Organizational Overview
Our Analytics organization comprises data analysts who focus on enabling strategies to enhance customer and partner experience and optimize business performance through data management and the development of full-stack descriptive-to-prescriptive analytics solutions using cutting-edge technologies, thereby enabling business growth.

Role Summary/Purpose
The Analyst, Data Sourcing - Metadata (Individual Contributor) role is located in the India Analytics Hub (IAH) as part of Synchrony's enterprise Data Office. This role is responsible for supporting metadata management processes within Synchrony's public and private cloud and on-prem environments within the Chief Data Office. It focuses on assisting with metadata harvesting, maintaining data dictionaries, and supporting the tracking of data lineage (a minimal catalog-harvesting sketch follows below). The analyst will collaborate closely with senior team members to ensure access to accurate, well-governed metadata for analytics and reporting.

Key Responsibilities
Implement and maintain metadata management processes across Synchrony's public and private cloud and on-prem environments, ensuring accurate integration with technical and business metadata catalogs.
Work with the Data Architecture and Data Usage teams to track data lineage, traceability, and compliance, identifying and escalating metadata-related issues.
Document technical specifications, support solution design, and participate in agile development and release cycles for metadata initiatives.
Adhere to data management policies, track KPIs for metadata effectiveness, and assist in assessing metadata risks to strengthen governance.
Maintain stable operations, troubleshoot metadata and lineage issues, and contribute to continuous process improvements that improve data accessibility.

Required Skills & Knowledge
Bachelor's degree, preferably in Engineering or Computer Science, with more than 1 year of hands-on data management experience, or more than 3 years of experience in lieu of a degree.
Minimum of 1 year of experience in data management, focusing on metadata management, data governance, or data lineage, with exposure to cloud environments (AWS, Azure, or Google Cloud) and on-premise infrastructure.
Basic understanding of metadata management concepts; familiarity with data cataloging tools (e.g., AWS Glue Data Catalog, Ab Initio, Collibra); basic proficiency in data lineage tracking tools (e.g., Apache Atlas, Ab Initio, Collibra); and understanding of data integration technologies (e.g., ETL, APIs, data pipelines).
Good communication and collaboration skills, strong analytical thinking and problem-solving abilities, the ability to work independently and manage multiple tasks, and attention to detail.

Desired Skills & Knowledge
AWS certifications such as AWS Cloud Practitioner or AWS Certified Data Analytics - Specialty.

Preferred Qualifications
Familiarity with hybrid cloud environments (a combination of cloud and on-prem).
Skilled in Ab Initio Metadata Hub development and support, including importers, extractors, Metadata Hub database extensions, technical lineage, QueryIT, Ab Initio graph development, Ab Initio Control Center, and Express IT.
Experience with harvesting technical lineage and producing lineage diagrams.
Familiarity with Unix, Linux, and Stonebranch, and with database platforms such as Oracle and Hive.
Basic knowledge of SQL and data query languages for managing and retrieving metadata.
Understanding of data governance frameworks (e.g., EDMC DCAM, GDPR compliance).
Familiarity with Collibra.

Eligibility Criteria
Bachelor's degree, preferably in Engineering or Computer Science, with more than 1 year of hands-on data management experience, or more than 3 years of experience in lieu of a degree.

Work Timings
This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams; the remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs; please discuss this with the hiring manager for more details.

For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format).
You must not be on any corrective action plan (First Formal/Final Formal, LPP).
L4 to L7 employees who have completed 12 months in the organization and 12 months in their current role and level are eligible. L8 employees who have completed 18 months in the organization and 12 months in their current role and level are eligible. L04+ employees can apply.

Grade/Level: 08
Job Family Group: Information Technology
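A minimal sketch of metadata harvesting against a catalog such as the AWS Glue Data Catalog mentioned above; the database name is hypothetical, and boto3 with configured AWS credentials is assumed:

```python
import boto3

glue = boto3.client("glue")

# Walk every table in one catalog database and print a simple data dictionary
paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName="analytics_db"):  # hypothetical name
    for table in page["TableList"]:
        columns = table.get("StorageDescriptor", {}).get("Columns", [])
        print(table["Name"])
        for col in columns:
            print(f"  {col['Name']}: {col.get('Type', 'unknown')}")
```

In practice, output like this would feed a data dictionary or be reconciled against a business glossary rather than printed.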
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer

What You Will Do
Let's do this. Let's change the world. In this vital role you will be responsible for "Run" and "Build" project portfolio execution and will collaborate with business partners and other IS service leads to deliver IS capability and a roadmap in support of business strategy and goals. Real-world data analytics, visualization, and advanced technology play a vital role in supporting Amgen's industry-leading, innovative Real World Evidence approaches. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
Design, develop, and maintain data solutions for data generation, collection, and processing.
Be a key team member that assists in the design and development of the data pipeline.
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a minimal PySpark cleaning sketch follows below).
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
Implement data security and privacy measures to protect sensitive data.
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
Identify and resolve complex data-related challenges.
Adhere to best practices for coding, testing, and designing reusable code/components.
Explore new tools and technologies that will help to improve ETL platform performance.
Participate in sprint planning meetings and provide estimations on technical implementation.
Collaborate and communicate effectively with product teams.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of experience in Computer Science, IT, or a related field.

Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), including workflow orchestration and performance tuning on big data processing.
Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training.
Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
Excellent problem-solving skills and the ability to work with large, complex datasets.
Strong understanding of data governance frameworks, tools, and best practices; knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).

Preferred Qualifications:
Good-to-Have Skills:
Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
Strong understanding of data modeling, data warehousing, and data integration concepts.
Knowledge of Python/R, Databricks, SageMaker, OMOP.

Professional Certifications:
Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments).
Certified Data Scientist (preferred on Databricks or cloud environments).
Machine Learning Certification (preferred on Databricks or cloud environments).
SAFe for Teams certification (preferred).

Soft Skills:
Excellent critical-thinking and problem-solving skills.
Strong communication and collaboration skills.
Demonstrated awareness of how to function in a team setting.
Demonstrated presentation skills.

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
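A minimal sketch of the kind of ETL quality step described above, in PySpark; the paths, column names, and validity rules are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_cleaning").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/claims/")  # hypothetical path

clean = (
    raw.dropDuplicates(["claim_id"])               # basic de-duplication
       .filter(F.col("claim_amount") > 0)          # simple validity rule
       .withColumn("load_date", F.current_date())  # audit column
)

# Partitioning by load date keeps reprocessing and lineage checks cheap
(clean.write
      .mode("overwrite")
      .partitionBy("load_date")
      .parquet("s3://example-bucket/curated/claims/"))
```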
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Associate Data Engineer

What You Will Do
Let's do this. Let's change the world. In this vital role we seek a skilled Data Engineer to build and optimize our data infrastructure. As a key contributor, you will collaborate closely with cross-functional teams to design and implement robust data pipelines that efficiently extract, transform, and load data into our AWS-based data lake and data warehouse. Your expertise will be instrumental in empowering data-driven decision making through advanced analytics and predictive modeling.

Roles & Responsibilities:
Building and optimizing data pipelines, data warehouses, and data lakes on the AWS and Databricks platforms (a minimal orchestration sketch follows below).
Managing and maintaining the AWS and Databricks environments.
Ensuring data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
Maintaining system uptime and optimal performance.
Working closely with cross-functional teams to understand business requirements and translate them into technical solutions.
Exploring and implementing new tools and technologies to enhance ETL platform performance.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Bachelor's degree and 2 to 6 years of experience.

Functional Skills:

Must-Have Skills:
Proficient in SQL for extracting, transforming, and analyzing complex datasets from both relational and columnar data stores, with proven ability to optimize query performance on big data platforms.
Proficient in leveraging Python, PySpark, and Airflow to build scalable and efficient data ingestion, transformation, and loading processes.
Ability to learn new technologies quickly.
Strong problem-solving and analytical skills.
Excellent communication and teamwork skills.

Good-to-Have Skills:
Experience with SQL/NoSQL databases and vector databases for large language models.
Experience with data modeling and performance tuning for both OLAP and OLTP databases.
Experience with Apache Spark and Apache Airflow.
Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
Experience with AWS, GCP, or Azure cloud services.

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
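A minimal sketch of a scheduled pipeline in Airflow 2.x, of the kind this role builds; the DAG id is hypothetical and the task bodies are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")        # placeholder

def transform():
    print("clean and reshape data")  # placeholder

def load():
    print("write to the warehouse")  # placeholder

with DAG(
    dag_id="example_daily_etl",      # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```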
Posted 1 week ago
5.0 - 9.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Career Category: Engineering

Job Description
Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Sr Associate IS Architect

What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to deliver actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Design, develop, and maintain data solutions for data generation, collection, and processing.
Be a key team member that assists in the design and development of the data pipeline.
Stand up and enhance BI reporting capabilities through Cognos, Power BI, or similar tools.
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
Implement data security and privacy measures to protect sensitive data.
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
Collaborate and communicate effectively with product teams.
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
Adhere to best practices for coding, testing, and designing reusable code/components.
Explore new tools and technologies that will help to improve ETL platform performance.
Participate in sprint planning meetings and provide estimations on technical implementation.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree with 5-9 years of experience in Computer Science, IT, or a related field.

Functional Skills:

Must-Have Skills:
Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience in using Databricks for building ETL pipelines and handling big data processing.
Experience with data warehousing platforms such as Amazon Redshift or Snowflake.
Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets.
Experience in BI reporting tools such as Cognos, Power BI, and/or Tableau.
Experience with software engineering best practices, including version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps.

Good-to-Have Skills:
Experience with cloud platforms such as AWS, particularly data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena).
Experience with the Anaplan platform, including building, managing, and optimizing models and workflows, including scalable data integrations.
Understanding of machine learning pipelines and frameworks for ML/AI models.

Professional Certifications:
AWS Certified Data Engineer (preferred).
Databricks Certified (preferred).

Soft Skills:
Excellent critical-thinking and problem-solving skills.
Strong communication and collaboration skills.
Demonstrated awareness of how to function in a team setting.
Demonstrated presentation skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
9.0 - 14.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Role Overview:
As a Software Development Engineer, you will be an integral part of the Data Protection Group in India, developing cross-platform endpoint applications for Windows and Linux. To be successful in this role you should have exceptional technical, communication, and project management skills, multiple years of experience designing and implementing enterprise-class products, and the ability to work in a team toward achieving organizational goals. In this position, you will be involved in all aspects of the product development lifecycle: requirements discussion and analysis, design, scope estimation, planning, implementation, code reviews and unit testing, documentation, POCs, deployment, and continuous engineering. You will also be responsible for release deployment and supporting customers using the products in production. The ideal candidate will foster a culture of innovation while displaying exemplary technical expertise, ownership, and commitment to delivering high-quality endpoint security solutions on a variety of desktops. For you to be successful in this role, you need excellent debugging and development skills in Java.

About the role:
Design and develop breakthrough multiplatform software for securing endpoints on a variety of desktop and cloud platforms.
Gather technical requirements and specifications from customers and business stakeholders, and develop technical specifications according to which solutions are defined and delivered.
Deliver solutions that meet the timeline, quality, and cost targets for projects and deliverables. Solutions must meet the preset goals for quality, security, and performance.

About You:
The ideal candidate will have 9+ years of experience in Java web application development (Java 1.8 and above).
Strong leadership qualities.
Expert in the Spring and Hibernate frameworks.
Proficiency with an application server such as Tomcat; able to troubleshoot Java web applications on Tomcat.
Databases: HSQL, Oracle, MSSQL (experience with two of these is mandatory).
Knowledge of the Apache Tapestry framework.
UI technologies: JavaScript, HTML, style sheets.
Experience with any one of the cloud platforms (AWS, Azure, or GCP) is preferred.
Posted 1 week ago
8.0 - 13.0 years
7 - 11 Lacs
Bengaluru
Work from Office
About the Role:
- Lead the design, development, and deployment of large-scale software systems in Python and Go
- Understanding of data pipelines and event/log processing (e.g., syslog, JSON, Protobuf/MsgPack) and of brokers and transports such as gRPC, Apache Kafka, Pulsar, Redpanda, RabbitMQ, etc. (a minimal consumer sketch follows below)
- Own end-to-end product features, from initial design through to production, with a focus on high-quality, maintainable code
- Architect scalable, reliable, and secure software solutions with a focus on performance and usability
- Contribute to system design decisions, optimizing for scalability, availability, and performance
- Mentor and guide junior engineers, providing technical leadership and fostering a culture of excellence
- Integrate with CI/CD pipelines, continuously improving and optimizing them for faster and more reliable software releases
- Conduct code reviews to ensure best practices in coding, testing, and design patterns
- Troubleshoot, debug, and resolve complex technical issues in production and development environments

About You:
- 8+ years of professional software development experience
- Expertise in Go and Python, and in design patterns
- Hands-on experience with system design, architecture, and scaling of complex systems
- Strong exposure to CI/CD practices and tools (e.g., ArgoCD, GitHub Actions)
- Deep knowledge of Kubernetes (e.g., CRDs, Helm, Kustomize, design and implementation of Kubernetes Operators)
- Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation)
- Good understanding of networking and storage (e.g., load balancers, proxies)
- Experience working in cloud environments (e.g., AWS, Azure, GCP) and with containerization technologies (e.g., Docker, Kubernetes)
- Proficient in database design and optimization, with experience in both SQL and NoSQL databases (e.g., OpenSearch, ClickHouse, Apache Iceberg)
- Proven experience in Agile methodologies and working in cross-functional teams
- Excellent problem-solving skills, with the ability to break down complex problems into manageable solutions
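As referenced in the responsibilities above, here is a minimal, hedged sketch of JSON event/log consumption from Kafka in Python; it assumes the kafka-python client, and the topic, brokers, and event fields are hypothetical.

```python
# Minimal Kafka log-event consumer sketch (kafka-python client).
# Topic, brokers, and event fields are illustrative assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "app-logs",                              # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    auto_offset_reset="earliest",
)

for record in consumer:
    event = record.value
    # Route errors for alerting; other events could feed a pipeline.
    if event.get("level") == "ERROR":
        print(f"alert: {event.get('message')} (offset={record.offset})")
```

The same pattern applies to Pulsar, Redpanda, or RabbitMQ with their respective clients.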
Posted 1 week ago
4.0 - 7.0 years
5 - 8 Lacs
Pune
Work from Office
Critical Skills to Possess:

Required Qualifications:
- Minimum 5 years of experience with Red Hat Enterprise Linux, Solaris, and HP-UX OS administration
- Strong knowledge of system security, patch management, and performance tuning
- Experience with Azure IaaS/PaaS services and hybrid cloud integration
- Proficiency in Veritas NetBackup administration and recovery procedures
- Hands-on experience with Apache HTTP Server, Oracle WebLogic, and Apache Tomcat
- Familiarity with scripting (Shell, Python, or Perl) for automation and monitoring (a brief sketch follows below)
- Excellent troubleshooting skills and ability to work independently or in a team

Preferred Qualifications:
- Experience with configuration management tools (e.g., Ansible)
- Familiarity with ITIL practices and change management processes
- BS degree in Computer Science or Engineering, or equivalent experience

Roles and Responsibilities:
- Perform installation, configuration, and maintenance of Red Hat and Solaris servers in production and non-production environments
- Manage operating system upgrades and security patching cycles to ensure compliance and system integrity
- Troubleshoot and resolve complex issues related to the OS, middleware (Apache, WebLogic, Tomcat), and integrated applications
- Collaborate with application teams to support deployments and performance tuning
- Implement and maintain backup and recovery strategies using Veritas NetBackup
- Support hybrid cloud infrastructure, particularly Azure-based workloads and services
- Monitor system performance, automate routine tasks, and maintain system documentation
- Participate in on-call rotation and provide extended after-hours support as needed
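Where the posting mentions scripting for automation and monitoring, a minimal Python sketch of a routine filesystem check might look like the following; the mount points and threshold are illustrative assumptions, and a real script would feed an alerting tool rather than print.

```python
# Sketch of a routine monitoring task: warn when filesystems run full.
# Mount points and the threshold are hypothetical.
import shutil

THRESHOLD = 0.90  # alert above 90% utilization

for mount in ("/", "/var", "/opt"):
    usage = shutil.disk_usage(mount)
    used_frac = usage.used / usage.total
    if used_frac > THRESHOLD:
        print(f"WARNING: {mount} is {used_frac:.0%} full")
    else:
        print(f"OK: {mount} at {used_frac:.0%}")
```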
Posted 1 week ago
8.0 - 13.0 years
0 Lacs
Pune
Remote
Role & responsibilities: Outline the day-to-day responsibilities for this role. Preferred candidate profile: Specify required role expertise, previous job experience, or relevant certifications.
Posted 1 week ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role

What are Aladdin and Aladdin Engineering? You will be working on BlackRock's investment operating system called Aladdin, which is used both internally within BlackRock and externally by many financial institutions. Aladdin combines sophisticated risk analytics with comprehensive portfolio management, trading, and operations tools on a single platform. It powers informed decision-making and creates a connective tissue for thousands of users investing worldwide. Our development teams are part of Aladdin Engineering. We collaborate to build the next generation of technology that transforms the way information, people, and technology intersect for global investment firms. We build and package tools that manage trillions in assets and support millions of financial instruments. We perform risk calculations and process millions of transactions for thousands of users worldwide every day.

Your Team: The Database Hosting Team is a key part of Platform Hosting Services, which operates under the broader Aladdin Engineering group. Hosting Services is responsible for managing the reliability, stability, and performance of the firm's financial systems, including Aladdin, and ensuring its availability to our business partners and customers. We are a globally distributed team, spanning multiple regions, providing engineering and operational support for online transaction processing, data warehousing, data replication, and distributed data processing platforms.

Your Role and Impact: Data is the backbone of any world-class financial institution. The Database Operations Team ensures the resiliency and integrity of that data while providing instantaneous access to a large global user base at BlackRock and across many institutional clients. As specialists in database technology, our team is involved in every aspect of system design, implementation, tuning, and monitoring, using a wide variety of industry-leading database technologies. We also develop code to provide analysis and insights and to automate our solutions at scale. Although our specialty is database technology, to excel in our role we must understand the environment in which our technology operates. This includes understanding the business needs, the application server stack, and the interactions between database software, operating systems, and host hardware to deliver the best possible service. We are passionate about performance and innovation. At every level of the firm, we embrace diversity and offer flexibility to enhance work-life balance.

Your Responsibilities: The role involves providing operations, development, and project support within the global database environment across various platforms. Key responsibilities include:

Operational Support for Database Technology:
- Engineering, administration, and operations of OLTP, OLAP, data warehousing platforms, and distributed NoSQL systems
- Collaboration with infrastructure teams, application developers, and business teams across time zones to deliver high-quality service to Aladdin users
- Automation and development of database operational, monitoring, and maintenance toolsets to achieve scalability and efficiency
- Database configuration management, capacity and scale management, schema releases, consistency, security, disaster recovery, and audit management
- Managing operational incidents, conducting root-cause analysis, resolving critical issues, and mitigating future risks
- Assessing issues for severity, troubleshooting proactively, and ensuring timely resolution of critical system issues
- Escalating outages when necessary, collaborating with Client Technical Services and other teams, and coordinating with external vendors for support

Project-Based Participation:
- Involvement in major upgrades and migration/consolidation exercises
- Exploring and implementing new product features
- Contributing to performance tuning and engineering activities

Contributing to Our Software Toolset:
- Enhancing monitoring and maintenance utilities in Perl, Python, and Java (a hedged sketch follows below)
- Contributing to data captures to enable deeper system analysis

Qualifications:
- B.E./B.Tech/MCA or another relevant engineering degree from a reputable university
- 4+ years of proven experience in database administration or a similar role

Skills and Experience:
- Enthusiasm for acquiring new technical skills
- Effective communication with senior management from both IT and business areas
- Understanding of large-scale enterprise application setups across data centers/cloud environments
- Willingness to work weekends on DBA activities and shift hours
- Experience with database platforms like SAP Sybase, Microsoft SQL Server, Apache Cassandra, Cosmos DB, and PostgreSQL, and data warehouse platforms such as Snowflake and Greenplum
- Exposure to public cloud platforms such as Microsoft Azure, AWS, and Google Cloud
- Knowledge of programming languages like Python, Perl, Java, and Go; automation tools such as Ansible/AWX; and source control systems like Git and Azure DevOps
- Experience with operating systems like Linux and Windows
- Strong background in supporting mission-critical applications and performing deep technical analysis
- Flexibility to work with various technologies and write high-quality code
- Exposure to project management
- Passion for interactive troubleshooting, operational support, and innovation
- Creativity and a drive to learn new technologies
- Data-driven problem-solving skills and a desire to scale technology for future needs

Operating Systems:
- Familiarity with Linux/Windows
- Proficiency with shell commands (grep, find, sed, awk, ls, cp, netstat, etc.)
- Experience checking system performance metrics like CPU, memory, and disk usage on Unix/Linux

Other Personal Characteristics:
- Integrity and the highest ethical standards
- Ability to quickly adjust to complex data and information, displaying strong learning agility
- Self-starter with a commitment to superior performance
- Natural curiosity and a desire to always learn

If this excites you, we would love to discuss your potential role on our team!

Our Benefits: To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model: BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
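As a hedged illustration of the Python monitoring and maintenance utilities referenced above, here is a minimal database health-check sketch; it assumes the psycopg2 client and hypothetical DSNs, and stands in for whatever mix of platforms the team actually operates.

```python
# Sketch of a multi-instance database health check (psycopg2 client).
# DSNs are illustrative; a real utility would also track replication lag,
# connection counts, and storage, and report into a monitoring system.
import psycopg2

DSNS = {
    "primary": "host=db1.example.com dbname=app user=monitor",
    "replica": "host=db2.example.com dbname=app user=monitor",
}

for name, dsn in DSNS.items():
    try:
        conn = psycopg2.connect(dsn, connect_timeout=5)
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            cur.fetchone()
        conn.close()
        print(f"{name}: OK")
    except psycopg2.Error as exc:
        print(f"{name}: FAILED ({exc})")
```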
About BlackRock: At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit www.blackrock.com | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock. BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 1 week ago
3.0 - 7.0 years
5 - 9 Lacs
Thane
Work from Office
Job purpose: Provide advanced technical support, analysis, and recommended fixes for complex issues via phone calls, the ticketing system (email), online chat, and onsite visits at client premises.

Duties and responsibilities:
- Manage multiple critical and high-visibility issues
- Responsible for triaging, analysing, and troubleshooting live issues
- Provide day-to-day technical support and ensure that processes are followed and all critical issues are analysed in a timely manner
- Suggest alternatives to optimize and/or resolve frequent customer complaints about existing product features based on customer feedback
- Develop an in-depth understanding of the products and expertise in the applications
- Self-motivated and highly professional, with the ability to lead and take ownership and responsibility
- Fast learner, energetic, and enthusiastic
- Positive can-do attitude
- Team player

Working conditions:
- Effective working relationships with all functional units of the organization
- Able to work independently or as part of a team
- Effective communication skills: verbal, non-verbal, and written communication at all levels of the organization
- Shift rotation will be 24x7 (if required)
- Domestic and international client base handling

Direct reports: Work as part of a team led by the Asst. Manager, to whom the executives will report.

Qualifications: Minimum Graduate from any stream
Specialized Knowledge: Expertise in Linux
Professional Certification: RHEL certification

Skills:
- Strong in operating systems like RHEL, CentOS, Ubuntu
- Strong debugging/troubleshooting skills
- Advanced skill in configuring and supporting software applications like Postfix, Squid, Apache, MySQL, Jabber, iptables, Cyrus, MailScanner, LDAP, DNS, Nginx, Redis, ClickHouse, Azure, AWS, and Bash scripting

Experience: Experience as part of a technical department; experience in the mailing service industry is preferable.
Other characteristics: Analytical, works under strict timelines, punctual, and a positive attitude.
Any other: Willing to work in rotational shifts on weekdays, weekends, and holidays.

Why Netcore Cloud: Being first is in our nature. Netcore Cloud is the first and leading AI/ML-powered customer engagement and experience (CEE) platform that helps B2C brands increase engagement, conversions, revenue, and retention. Our cutting-edge SaaS products enable personalized engagement across the entire customer journey and build amazing digital experiences for businesses of all sizes.

Netcore's engineering team focuses on adoption, scalability, complex challenges, and the fastest processing. We use versatile tech stacks like streaming technologies and queue management systems such as Kafka, Storm, RabbitMQ, Celery, and RedisQ.

Netcore has a perfect combination of experience and an agile mind. We currently work with 5000+ enterprise brands across 18 countries and serve more than 70% of the unicorns in India, making us among the top-rated customer engagement and experience platforms.

Headquartered in Mumbai, we have a global footprint across 10 countries worldwide, including the United States and Germany. Being certified as a Great Place to Work for three consecutive years only reinforces Netcore's principle of being a people-centric company, where you will not be just an employee but a family member of the organization.

A career at Netcore is more than just a job; it's an opportunity to shape the future.

For more information, please visit netcorecloud.com or follow us on LinkedIn.

What's in it for you:
- Immense growth, continuous learning, and delivering the best to top-notch brands
- Work with some of the most innovative brains
- Opportunity to explore your entrepreneurial mindset
- Open culture where your creative bug gets activated

If this sounds like a company you would like to be a part of, and a role you would thrive in, please don't hold back from applying! We need your unique perspective for our continued innovation and success! So let's converse! Our inquisitive nature makes us keen to know more about you.
Posted 1 week ago
7.0 - 12.0 years
5 - 9 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
We are looking for an experienced Search Developer skilled in Java and Apache SOLR to design, develop, and maintain high-performance, scalable search solutions for enterprise or consumer-facing applications. The ideal candidate will work closely with cross-functional teams to optimize search relevance, speed, and reliability while handling large, complex datasets.

Key Responsibilities:
- Design, implement, and optimize search applications and services using Java and Apache SOLR
- Develop and maintain SOLR schemas, configurations, indexing pipelines, and query optimization for datasets often exceeding 100 million documents
- Build and enhance scalable RESTful APIs and microservices around search functionalities
- Work with business analysts and stakeholders to gather search requirements and improve user experience through advanced search features such as faceting, filtering, and relevance tuning (illustrated in the sketch below)
- Perform SOLR cluster management, including sharding, replication, scaling, and backup/recovery operations
- Monitor application performance, troubleshoot issues, and implement fixes to ensure system stability and responsiveness
- Integrate SOLR with relational and NoSQL databases, streaming platforms, and ETL processes
- Participate in code reviews, adopt CI/CD processes, and contribute to architectural decisions
- Stay updated with the latest developments in SOLR, Java frameworks, and search technologies

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline
- 7+ years of hands-on experience in Java development, including frameworks like Spring and Hibernate
- 3+ years of solid experience working with Apache SOLR, including SOLRCloud, schema design, indexing, query parsing, and search tuning
- Strong knowledge of search technologies (Lucene, Solr) and experience managing large-scale search infrastructures
- Experience in RESTful API design and microservices architecture
- Familiarity with SQL and NoSQL databases
- Ability to write efficient, multi-threaded, and distributed system code
- Strong problem-solving skills and debugging expertise
- Experience with version control (Git), build tools (Maven/Gradle), and CI/CD pipelines (Jenkins, GitHub Actions)
- Understanding of Agile/Scrum software development methodologies
- Excellent communication skills and ability to collaborate with cross-functional teams

Preferred Skills (would be a plus):
- Experience with other search platforms like Elasticsearch
- Knowledge of cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes)
- Familiarity with streaming platforms such as Kafka
- Exposure to analytics and machine learning for search relevance enhancement
- Prior experience in large-scale consumer web or e-commerce search applications

We offer:
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
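The role is Java-centric, but the faceting, filtering, and relevance-tuning features called out above can be shown compactly with the Python pysolr client; the core, fields, and boosts below are hypothetical assumptions, and the same request parameters apply from SolrJ.

```python
# Illustrative SOLR query with a filter, a facet, and a relevance boost.
# Core name, fields, and boosts are hypothetical.
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/products", timeout=10)

results = solr.search(
    "laptop",
    **{
        "defType": "edismax",
        "qf": "title^3 description",  # relevance tuning: boost title matches
        "fq": "in_stock:true",        # filter query
        "facet": "true",
        "facet.field": "brand",
        "rows": 10,
    },
)

for doc in results.docs:
    print(doc.get("title"))
print(results.facets["facet_fields"]["brand"])  # facet value/count pairs
```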
Posted 1 week ago
5.0 - 8.0 years
25 - 30 Lacs
Chennai
Work from Office
Job Purpose: Design and develop end-to-end software solutions that power innovative products at Trimble. Leverage your expertise in C#, ASP.NET (Framework/Core), Web API, Angular, and Microsoft Azure services to build scalable and high-performance web applications. This role involves hands-on full-stack development, including responsive front-end UI, robust server-side logic, and secure, cloud-integrated backend services. You will work in an Agile team environment, collaborating with cross-functional teams to deliver impactful digital solutions while maintaining high code quality, performance, and security standards.

Primary Responsibilities:
- Understand high-level product and technical requirements and convert them into scalable full-stack software designs
- Develop server-side applications using C#, ASP.NET Core/Framework, Web API, and Entity Framework
- Build intuitive and responsive front-end interfaces using Angular, JavaScript, HTML, and CSS
- Design, develop, and maintain RESTful APIs, including OData APIs, ensuring proper versioning and security
- Integrate authentication and authorization mechanisms using industry standards
- Work with Microsoft SQL Server for designing schemas, writing queries, and optimizing performance
- Build microservices and modular web components adhering to best practices
- Develop and deploy Azure Functions, utilize Azure Service Bus, and manage data using Azure Storage
- Integrate with messaging systems such as Apache Kafka for distributed event processing
- Contribute to CI/CD workflows, manage source control using Git, and participate in code reviews and team development activities
- Write and maintain clean, well-documented, and testable code with unit and integration test coverage
- Troubleshoot and resolve performance, scalability, and maintainability issues across the stack
- Support production deployments and maintain operational excellence for released features
- Stay current with evolving technologies and development practices to improve team efficiency and product quality

Skills and Background:
- Strong proficiency in C# and .NET Framework 4.x / .NET Core
- Solid experience in ASP.NET MVC / ASP.NET Core, Web API, and Entity Framework / EF Core
- Knowledge of OData APIs, REST principles, and secure web communication practices
- Front-end development experience using JavaScript, Angular (preferred), HTML5, CSS3
- Proficient with Microsoft SQL Server, including query tuning, indexing, and stored procedures
- Experience with authentication and authorization (OAuth, JWT, claims-based security)
- Experience building microservices and using web services
- Hands-on with Azure Functions, Azure Service Bus, and Azure Storage
- Experience integrating and processing messages using Apache Kafka
- Knowledge of source control systems like Git, and experience in Agile development environments
- Exposure to unit testing frameworks, integration testing, and DevOps practices
- Ability to write clean, maintainable, and well-structured code
- Excellent problem-solving, debugging, and troubleshooting skills
- Strong communication and collaboration skills

Work Experience:
- 5-8 years of experience as a Full Stack Engineer or Software Developer
- Proven experience delivering scalable web applications and services in a production environment
- Experience in Agile/Scrum teams and cross-cultural collaboration
- Tier-1 or Tier-2 product company or equivalent high-performance team experience preferred

Minimum Required Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related discipline from a Tier-1 or Tier-2 institute.

Reporting: The individual selected for this role will report to a Technical Project Manager, Engineering Manager, Engineering Director, or another designated leader within the division.

About Trimble: Dedicated to the world's tomorrow, Trimble is a technology company delivering solutions that enable our customers to work in new ways to measure, build, grow and move goods for a better quality of life. Core technologies in positioning, modeling, connectivity, and data analytics connect the digital and physical worlds to improve productivity, quality, safety, transparency, and sustainability. From purpose-built products and enterprise lifecycle solutions to industry cloud services, Trimble is transforming critical industries such as construction, geospatial, agriculture, and transportation to power an interconnected world of work. For more information, visit: www.trimble.com

Trimble's Inclusiveness Commitment: We believe in celebrating our differences. That is why our diversity is our strength. To us, that means actively participating in opportunities to be inclusive. Diversity, Equity, and Inclusion have guided our current success while also moving our desire to improve. We actively seek to add members to our community who represent our customers and the places we live and work. We have programs in place to ensure our people are seen, heard, and welcomed and, most importantly, that they know they belong, no matter who they are or where they come from.
Posted 1 week ago
6.0 - 11.0 years
20 - 25 Lacs
Chennai
Work from Office
Job Purpose: As a Lead Software Development Engineer in Test (SDET) on the Viewpoint team at Trimble, you will lead the test automation strategy, execution, and process optimization for large-scale web and mobile applications. In this role, you will mentor junior SDETs, work closely with development and product teams, and ensure quality through continuous testing and automation best practices. You will be accountable for driving test automation across platforms (web, iOS, Android), defining scalable frameworks, and establishing CI/CD-integrated quality gates. Your contribution will be critical to ensuring smooth, high-quality releases for Trimble Viewpoint's mission-critical enterprise software used in the global construction industry.

What You Will Do:
- Define, implement, and evolve the overall test automation strategy for the Viewpoint product suite
- Build and maintain scalable, reusable test automation frameworks using C# for web and Appium/Selenium for mobile (iOS/Android)
- Provide technical leadership to the SDET team, including reviewing test architecture, test cases, and automation code
- Champion quality-first principles across Agile teams and guide integration of testing into all stages of the development lifecycle
- Set up and manage cloud-based testing infrastructure using Sauce Labs, emulators/simulators, and physical devices
- Develop test strategies for API, functional, regression, performance, and cross-platform compatibility testing
- Lead root cause analysis of complex issues in coordination with development and QA teams
- Drive continuous improvements in test coverage, speed, and reliability across mobile and web
- Design dashboards and metrics to track test effectiveness, code coverage, and defect trends
- Collaborate with product managers, architects, and engineering leaders to align quality initiatives with business goals
- Help integrate test automation into CI/CD pipelines and maintain quality gates for every release
- Evaluate and recommend new tools, frameworks, and processes to improve automation and testing workflows
- Mentor junior SDETs and foster a high-performance quality culture within the engineering team

What Skills & Experience You Should Have:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related technical field
- 6+ years of experience in software testing or SDET roles, with at least 2 years in a lead or senior QA/SDET capacity
- Advanced proficiency in test automation using C#, including frameworks like MSTest, NUnit, or xUnit
- Strong hands-on experience with Selenium, Appium, and mobile automation testing for iOS and Android
- Experience with Sauce Labs or similar device farms/cloud-based testing platforms
- Expertise in functional, regression, API, and performance testing
- Solid experience working in Agile teams, participating in sprint planning, estimations, and retrospectives
- Deep understanding of CI/CD pipelines, including integration of automated tests in build and deployment flows
- Prior experience with defect tracking systems (JIRA) and test case management tools (e.g., TestRail, Zephyr)
- Familiarity with testing RESTful services, backend workflows, and microservice architectures
- Excellent problem-solving skills, with a mindset for root-cause analysis and continuous improvement
- Strong verbal and written communication skills with the ability to influence stakeholders and drive quality initiatives

Viewpoint Engineering Context: You will be part of the Trimble Viewpoint team building enterprise software solutions for construction management. Viewpoint's technology stack includes:
- C#, ASP.NET (Core/Framework), Web API, Angular, OData, and Microsoft SQL Server
- Integration with Azure Functions, Azure Service Bus, Azure Storage, and Apache Kafka
- RESTful services, microservices, and modern frontend technologies
- Enterprise-grade CI/CD pipelines and Agile workflows

You'll work alongside experienced full-stack engineers, product managers, and other QA professionals to deliver production-grade releases at scale.

Reporting Structure: This position reports to a Technical Project Manager or Engineering Manager within the Viewpoint organization.

About Trimble: Trimble is a technology company transforming the way the world works by delivering solutions that connect the physical and digital worlds. Core technologies in positioning, modeling, connectivity, and data analytics improve productivity, quality, safety, and sustainability across industries like construction, agriculture, transportation, and geospatial. Visit www.trimble.com to learn more.

Trimble's Inclusiveness Commitment: We believe in celebrating our differences. Our diversity is our strength. We strive to build an inclusive workplace where everyone belongs and can thrive. Programs and practices at Trimble ensure individuals are seen, heard, welcomed and, most importantly, valued.
Posted 1 week ago
5.0 - 10.0 years
25 - 30 Lacs
Chennai
Work from Office
As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate it into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of current-state Receivables and Originations data in our data warehouse, performing impact analysis related to Ford Credit North America's modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for Ford Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform.

Qualifications:
- GCP Certified Professional Data Engineer
- Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions
- 5+ years of complex SQL development experience
- 2+ years of experience with programming languages such as Python, Java, or Apache Beam
- Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications through to production-scale solutions
- In-depth understanding of GCP's underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch/real-time): Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, DataProc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, and App Engine; storage including Cloud Storage; and DevOps tools such as Tekton, GitHub, Terraform, and Docker
- Expert in designing, optimizing, and troubleshooting complex data pipelines
- Experience developing and deploying microservices architectures leveraging container orchestration frameworks
- Experience in designing pipelines and architectures for data processing
- Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques
- Self-directed, works independently with minimal supervision, and adapts to ambiguous environments
- Evidence of a proactive problem-solving mindset and willingness to take the initiative
- Strong prioritization, collaboration, and coordination skills, and the ability to simplify and communicate complex ideas to cross-functional teams and all levels of management
- Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity
- Master's degree in computer science, software engineering, information systems, data engineering, or a related field
- Data engineering or development experience gained in a regulated financial environment
- Experience in coaching and mentoring data engineers
- Experience with project management tools like Atlassian JIRA
- Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment
- Experience with data security, governance, and compliance best practices in the cloud
- Experience using data science concepts on production datasets to generate insights

Responsibilities:
- Design and build production data engineering solutions on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, Dataform, Astronomer, Data Fusion, DataProc, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Artifact Registry, GCP APIs, Cloud Build, App Engine, and real-time data streaming platforms like Apache Kafka and GCP Pub/Sub (a brief BigQuery sketch follows below)
- Design new solutions to better serve AI/ML needs, and lead teams to expand our AI-enabled services
- Partner with governance teams to tackle key business needs
- Collaborate with stakeholders and cross-functional teams to gather and define data requirements and ensure alignment with business objectives
- Partner with analytics teams to understand how value is created using data, and with central teams to leverage existing solutions to drive future products
- Design and implement batch, real-time streaming, scalable, and fault-tolerant solutions for data ingestion, processing, and storage
- Create insights into existing data to fuel the creation of new data products
- Perform necessary data mapping, impact analysis for changes, root cause analysis, and data lineage activities, documenting information flows
- Implement and champion an enterprise data governance model; actively promote data protection, sharing, reuse, quality, and standards to ensure data integrity and confidentiality
- Develop and maintain documentation for data engineering processes, standards, and best practices; ensure knowledge transfer and ease of system maintenance
- Utilize GCP monitoring and logging tools to proactively identify and address performance bottlenecks and system failures
- Provide production support by addressing production issues as per SLAs
- Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure
- Work within an agile product team; deliver code frequently using Test-Driven Development (TDD), continuous integration, and continuous deployment (CI/CD)
- Continuously enhance your domain knowledge, stay current on the latest data engineering practices, and contribute to the company's technical direction while maintaining a customer-centric approach
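As flagged in the responsibilities list, here is a minimal, hedged sketch of a BigQuery read using the google-cloud-bigquery Python client; the project, dataset, and column names are illustrative assumptions, and application-default credentials are assumed.

```python
# Minimal BigQuery query sketch (google-cloud-bigquery client).
# Project, dataset, and columns are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT account_id, SUM(balance) AS total_balance
    FROM `example-project.receivables.accounts`
    GROUP BY account_id
    LIMIT 100
"""

# query() returns a job; result() blocks until rows are available.
for row in client.query(sql).result():
    print(row.account_id, row.total_balance)
```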
Posted 1 week ago
5.0 - 14.0 years
25 - 30 Lacs
Pune
Work from Office
Job Title: Full Stack Developer

Job Description: We are seeking a highly skilled and motivated Full Stack Developer to join our dynamic team. The ideal candidate will have a strong background in both front-end and back-end development, with expertise in the following areas:

Key Responsibilities:
- Develop and maintain web applications using the Angular framework (HTML, CSS, etc.) and JavaScript
- Implement object-oriented programming principles in both front-end and back-end development
- Design and develop Python-based backend services, including working with JSON objects and Python Flask servers (a minimal sketch follows below)
- Ensure secure and efficient communication using the HTTP and HTTPS protocols, including handling HTTPS requests and responses
- Implement and manage self-signed certificates for secure web pages
- Design and develop responsive user interfaces for mobile, tablet, and desktop devices, ensuring screen auto-resizing and an optimal user experience
- Create and manage packages for Angular projects
- Develop and integrate REST APIs
- Configure and manage Nginx servers for web application deployment
- Collaborate with UI/UX designers to translate Figma designs into functional web pages
- Utilize tools like Swagger for API documentation and testing
- Optimize web pages for improved screen responsiveness and performance
- Conduct automation testing of web pages using frameworks such as Robot Framework or similar
- Implement webpage tokenization, security, and encryption techniques
- Utilize browser-based developer tools for debugging and optimizing web applications on Android and Safari
- Work with various servers, including Apache, IIS, and others
- Integrate multi-language support into web applications, including languages such as Chinese and Spanish
- Ability to work on Windows- and Linux-based operating systems
- Knowledge of RDBMS or other databases

Qualifications:
- Proven experience as a Full Stack Developer or in a similar role
- Proficiency in the Angular framework, JavaScript, and object-oriented programming
- Strong knowledge of Python backend development and the Flask server
- Experience with HTTP/HTTPS protocols and secure communication
- Familiarity with self-signed certificates and their implementation
- Expertise in responsive UI design for various devices
- Experience with RESTful API development and integration
- Knowledge of Nginx server configuration and management
- Understanding of Figma and the ability to translate designs into code
- Familiarity with Swagger or similar API documentation tools
- Experience with automation testing frameworks
- Strong understanding of web security, tokenization, and encryption
- Proficiency with browser-based developer tools
- Experience with different server technologies (Apache, IIS, etc.)
- Ability to integrate multi-language support into web applications

Preferred Skills:
- Strong problem-solving skills and attention to detail
- Excellent communication and teamwork abilities
- Ability to work in a fast-paced and dynamic environment
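As referenced in the responsibilities above, a minimal Flask JSON service served over HTTPS with a self-signed certificate could be sketched as follows; the endpoint, port, and certificate paths are illustrative assumptions.

```python
# Minimal Flask JSON endpoint over HTTPS with a self-signed certificate.
# Route, port, and cert/key paths are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/status", methods=["GET"])
def status():
    # Return a JSON object, as the posting's backend work describes.
    return jsonify({"status": "ok", "client": request.remote_addr})

if __name__ == "__main__":
    # Passing a (cert, key) pair enables HTTPS; generate a development
    # self-signed pair with a tool such as openssl.
    app.run(host="0.0.0.0", port=8443, ssl_context=("cert.pem", "key.pem"))
```

In deployment, Nginx would typically terminate TLS in front of the Flask app instead.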
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Role: We are seeking a highly skilled and experienced Data Architect with expertise in designing and building data platforms in cloud environments. The ideal candidate will have a strong background in either AWS Data Engineering or Azure Data Engineering, along with proficiency in distributed data processing systems like Spark. Additionally, proficiency in SQL, data modeling, building data warehouses, and knowledge of ingestion tools and data governance are essential for this role. The Data Architect will also need experience with orchestration tools such as Airflow or Dagster and proficiency in Python, with knowledge of Pandas being beneficial.

Why Choose Ideas2IT: Ideas2IT has all the good attributes of a product startup and a services company. Since we launch our own products, you will have ample opportunities to learn and contribute. However, single-product companies stagnate in the technologies they use. In our multiple product initiatives and customer-facing projects, you will have the opportunity to work on various technologies. AGI is going to change the world. Big companies like Microsoft are betting heavily on this. We are following suit.

What's in it for you?
- You will get to work on impactful products instead of back-office applications, for the likes of customers like Facebook, Siemens, Roche, and more
- You will get to work on interesting projects like the Cloud AI platform for personalized cancer treatment
- Opportunity to continuously learn newer technologies
- Freedom to bring your ideas to the table and make a difference, instead of being a small cog in a big wheel
- Showcase your talent in Shark Tanks and Hackathons conducted in the company

Here's what you'll bring:
- Experience in designing and building data platforms in any cloud
- Strong expertise in either AWS Data Engineering or Azure Data Engineering
- Develop and optimize data processing pipelines using distributed systems like Spark
- Create and maintain data models to support efficient storage and retrieval
- Build and optimize data warehouses for analytical and reporting purposes, utilizing technologies such as Postgres, Redshift, Snowflake, etc.
- Knowledge of ingestion tools such as Apache Kafka, Apache NiFi, AWS Glue, or Azure Data Factory
- Establish and enforce data governance policies and procedures to ensure data quality and security
- Utilize orchestration tools like Airflow or Dagster to schedule and manage data workflows (a minimal Airflow sketch follows below)
- Develop scripts and applications in Python to automate tasks and processes
- Collaborate with stakeholders to gather requirements and translate them into technical specifications
- Communicate technical solutions effectively to clients and stakeholders
- Familiarity with multiple cloud ecosystems such as AWS, Azure, and Google Cloud Platform (GCP)
- Experience with containerization and orchestration technologies like Docker and Kubernetes
- Knowledge of machine learning and data science concepts
- Experience with data visualization tools such as Tableau or Power BI
- Understanding of DevOps principles and practices
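As referenced in the list above, a minimal orchestration sketch with Airflow might look like the following; it assumes the Airflow 2.4+ DAG API, and the DAG id and the ingestion callable are hypothetical placeholders.

```python
# Minimal daily DAG sketch (Airflow 2.4+ API, where `schedule` replaced
# `schedule_interval`). DAG id and task logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_to_warehouse():
    # Placeholder for an ingestion step (e.g., Kafka/S3 into the warehouse).
    print("ingesting batch...")

with DAG(
    dag_id="example_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="ingest", python_callable=ingest_to_warehouse)
```

Dagster expresses the same workflow with ops/assets plus a schedule definition.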
Posted 1 week ago
6.0 - 11.0 years
18 - 33 Lacs
Noida, Pune, Delhi / NCR
Hybrid
Iris Software has been a trusted software engineering partner to several Fortune 500 companies for over three decades. We help clients realize the full potential of technology-enabled transformation by bringing together a unique blend of domain knowledge, best-of-breed technologies, and experience executing essential and critical application development engagements.

Title: Sr. Data Engineer / Lead Data Engineer
Experience: 5-12 years
Location: Delhi/NCR, Pune
Shift: 12:30-9:30 pm IST

Requirements:
- 6+ years of experience in data engineering with a strong focus on AWS services
- Proven expertise in Amazon S3 for scalable data storage and AWS Glue for ETL and serverless data integration, using Amazon S3, DataSync, EMR, and Redshift for data warehousing and analytics (a minimal Glue sketch follows below)
- Proficiency in SQL, Python, or PySpark for data processing
- Experience with data modeling, partitioning strategies, and performance optimization
- Familiarity with orchestration tools like AWS Step Functions, Apache Airflow, or Glue Workflows

If interested, kindly share your resume at kanika.singh@irissoftware.com
Note: Notice period of at most 1 month.
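As referenced in the Glue bullet above, a minimal, hedged sketch of triggering and polling a Glue ETL job with boto3 follows; the job name and region are hypothetical placeholders.

```python
# Sketch: start an AWS Glue job run and poll until it finishes (boto3).
# Job name and region are hypothetical.
import time
import boto3

glue = boto3.client("glue", region_name="us-east-1")

run_id = glue.start_job_run(JobName="example-etl-job")["JobRunId"]

while True:
    state = glue.get_job_run(JobName="example-etl-job", RunId=run_id)[
        "JobRun"
    ]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED"):
        print("final state:", state)
        break
    time.sleep(30)
```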
Posted 1 week ago