4.0 - 7.0 years
3 - 7 Lacs
Noida
Work from Office
We are looking for a skilled Java Developer with 4 to 7 years of experience to design, develop, and implement secure and scalable payment processing applications. The ideal candidate will have excellent written and verbal communication skills and hands-on experience with payment gateways. This position is based in Noida or Bangalore.

Roles and Responsibilities: Design, develop, and implement secure and scalable Java-based payment processing applications. Integrate with payment gateways, banking systems, and third-party APIs for transaction processing. Work with real-time transaction processing systems, ensuring high availability and reliability. Develop and maintain microservices-based architectures using Spring Boot, REST APIs, and cloud technologies. Optimize database queries and transaction handling for payment applications. Collaborate with cross-functional teams, including business analysts, QA, and DevOps.

Requirements: Minimum 4 to 7 years of experience in Java development. Excellent written and verbal communication skills. Hands-on experience with payment gateways. Knowledge of ISO 8583, ISO 20022, SWIFT, ACH, SEPA, UPI, or other payment protocols. Experience with messaging systems such as Kafka, RabbitMQ, or JMS. Proficiency in SQL (PostgreSQL, MySQL, Oracle) and NoSQL databases (MongoDB, Redis, etc.).
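To illustrate the messaging-system skills this listing asks for (the role itself is Java-centric), here is a minimal sketch of publishing a payment event to Kafka, shown in Python with the kafka-python client; the broker address, topic name, and event fields are all hypothetical:

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Serialize payment events as JSON before publishing.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",          # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

payment_event = {
    "txn_id": "TXN-0001",      # hypothetical fields, loosely ISO 20022-flavoured
    "amount": "1250.00",
    "currency": "INR",
    "status": "AUTHORIZED",
}

producer.send("payments.authorized", value=payment_event)  # hypothetical topic
producer.flush()  # block until the broker acknowledges the send
```

In the actual role the same publish/consume pattern would typically be expressed with Spring Kafka on the JVM.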
Posted 1 month ago
5.0 - 8.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Java, J2EE, Spring Framework (Spring Boot, Spring MVC, Spring Data, Spring Security), AWS (Kubernetes, EKS, Kafka, Redis), OAuth2, JWT, MySQL/PostgreSQL, Git (GitHub/GitLab). Required candidate profile: RESTful APIs, microservices, and cloud plus database proficiency (AWS + SQL/NoSQL).
Posted 1 month ago
8.0 - 13.0 years
2 - 30 Lacs
Bengaluru
Work from Office
A Snapshot of Your Day: On a typical day, you will lead the design and implementation of scalable ETL/ELT data pipelines using Python or C#, while managing cloud-based data architectures on platforms like Azure and AWS. You'll collaborate with data scientists and analysts to ensure seamless data integration for analysis and reporting, and mentor junior engineers on standard processes. Additionally, you will monitor and optimize data pipelines for performance and cost efficiency, while ensuring compliance with data security and governance regulations.

How You'll Make an Impact: For our Onshore Execution Digital Product Development team, we are looking for a highly skilled Data Engineer with 8-10 years of experience. In this role, you will take ownership of designing and implementing data pipelines, optimizing data workflows, and supporting the data infrastructure. You will work with large datasets and cloud technologies while ensuring data quality, performance, and scalability. Lead the design and implementation of scalable ETL/ELT data pipelines using Python or C# for efficient data processing. Architect data solutions for large-scale batch and real-time processing using cloud services (AWS, Azure, Google Cloud). Craft and manage cloud-based data architectures with services like AWS Redshift, Google BigQuery, Azure Data Lake, and Snowflake. Implement cloud data solutions using Azure services such as Azure Data Lake, Blob Storage, SQL Database, Synapse Analytics, and Data Factory. Develop and automate data workflows for seamless integration into Azure platforms for analysis and reporting. Manage and optimize Azure SQL Database, Cosmos DB, and other databases for high availability and performance. Monitor and optimize data pipelines for performance and cost efficiency. Implement data security and governance practices in compliance with regulations (GDPR, HIPAA) using Azure security features. Collaborate with data scientists and analysts to deliver data solutions that meet business analytics needs. Mentor junior data engineers on standard processes in data engineering and pipeline design. Set up monitoring and alerting systems for data pipeline reliability. Ensure data accuracy and security through strong governance policies and access controls. Maintain documentation for data pipelines and workflows for transparency and onboarding.

What You Bring: 8-10 years of proven experience in data engineering with a focus on large-scale data pipelines and cloud infrastructure. Strong expertise in Python (Pandas, NumPy, ETL frameworks) or C# for efficient data processing solutions (a small ETL sketch follows this listing). Extensive experience with cloud platforms (AWS, Azure, Google Cloud) and their data services. Sophisticated knowledge of relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra). Familiarity with big data technologies (Apache Spark, Hadoop, Kafka). Strong background in data modeling and ETL/ELT development for large datasets. Experience with version control (Git) and CI/CD pipelines for data solution deployment. Excellent problem-solving skills for troubleshooting data pipeline issues. Experience in optimizing queries and data processing for speed and cost efficiency. Preferred: experience integrating data pipelines with machine learning or AI models; knowledge of Docker, Kubernetes, or containerized services for data workflows; familiarity with automation tools (Apache Airflow, Luigi, DBT) for managing data workflows; understanding of data privacy regulations (GDPR, HIPAA) and governance practices.

About the Team: Siemens Gamesa is part of Siemens Energy, a global leader in energy technology with a rich legacy of innovation spanning over 150 years. Together, we are committed to making sustainable, reliable, and affordable energy a reality by pushing the boundaries of what is possible. As a leading player in the wind industry and manufacturer of wind turbines, we are passionate about driving the energy transition and providing innovative solutions that meet the growing energy demand of the global community. At Siemens Gamesa, we are always looking for dedicated individuals to join our team and support our focus on energy transformation.

Our Commitment to Diversity: Lucky for us, we are not all the same. Through diversity, we generate power. We run on inclusion, and our combined creative energy is fueled by over 130 nationalities. Siemens Energy celebrates character no matter what ethnic background, gender, age, religion, identity, or disability. We energize society, all of society, and we do not discriminate based on our differences.

Rewards/Benefits: All employees are automatically covered under company-paid medical insurance, with a considerable family floater cover for the employee, spouse, and two dependent children up to 25 years of age. Siemens Gamesa provides an option to opt for a meal card, as per the terms and conditions prescribed in company policy, as part of CTC and as a tax-saving measure. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
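As a toy illustration of the Python (Pandas) ETL work described in this listing, here is a minimal extract-transform-load sketch; the file names and columns are invented, and writing Parquet assumes pyarrow is installed:

```python
import pandas as pd

# Extract: read raw transactions from a CSV export (hypothetical file and columns).
raw = pd.read_csv("transactions.csv")

# Transform: coerce amounts to numeric and aggregate per day.
raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce").fillna(0.0)
daily = raw.groupby("trade_date", as_index=False)["amount"].sum()

# Load: write a columnar file for downstream analytics (requires pyarrow).
daily.to_parquet("daily_totals.parquet", index=False)
```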
Posted 1 month ago
3.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Job Summary: Google Cloud Platform (GCP) Services: hands-on experience with GCP Cloud Storage, Cloud Composer, and Dataflow. ETL and Data Pipeline Development: strong knowledge of building end-to-end data pipelines, including data ingestion, transformation, and broadcasting across heterogeneous data systems. Database Technologies: strong in complex SQL; proficiency in SQL and NoSQL databases (e.g., MongoDB, PostgreSQL), with the ability to write complex queries and optimize performance. Programming Skills (Python, Java, or Scala): proficient in at least one programming language for developing scalable data solutions and automation scripts. CI/CD and DevOps Tools: leveraging tools like GitHub, CircleCI, and Harness to automate deployment workflows and manage data pipeline releases efficiently. Application Deployment & Observability: experience in production deployment, issue triage, and use of observability tools and best practices.
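As a sketch of the Dataflow pipeline work mentioned here, a minimal Apache Beam pipeline in Python; the bucket paths are placeholders, and running it on Dataflow would additionally require --runner=DataflowRunner plus project options:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# A tiny ingestion-transformation pipeline: read CSV lines, keep "OK" rows, write out.
with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/input/*.csv")    # placeholder path
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "KeepOK" >> beam.Filter(lambda rec: len(rec) > 2 and rec[2] == "OK")
        | "Format" >> beam.Map(",".join)
        | "Write" >> beam.io.WriteToText("gs://example-bucket/output/result")  # placeholder path
    )
```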
Posted 1 month ago
4.0 - 8.0 years
10 - 14 Lacs
Gandhinagar
Work from Office
Responsibilities: * Collaborate with cross-functional teams on backend development projects. * Design REST APIs using Node.js and Express.js. * Develop scalable backends with MongoDB, NestJS, Redis, and Socket.IO. Benefits: health insurance, provident fund.
Posted 1 month ago
10.0 - 18.0 years
25 - 35 Lacs
Gurugram
Work from Office
Key Responsibilities: Contribute to the development of our backend services using JavaScript with Node.js and Express.js, alongside MySQL/MariaDB databases. Advance our accountant and client portal frontends using TypeScript, modern Angular, React, Bootstrap, HTML5, and SCSS. Enhance our system management back office, leveraging Python, Django, Alpine.js, PostgreSQL, and Redis. Develop and maintain microservices, and plan new ones using suggested technologies such as TypeScript, Node.js, MongoDB, and Golang. Assist with the migration of services to Kubernetes, optimizing deployment on Debian Linux, Docker, and systemd. Maintain high-quality code and automated workflows using Git and GitLab CI/CD. Actively participate in agile development processes and collaborate on project management using JIRA. Qualifications: Extensive full-stack development experience, especially with JavaScript, TypeScript, Node.js, and Python. Strong knowledge of both SQL and NoSQL databases. Expertise in frontend frameworks like Angular and React. Experience with containerization and orchestration tools like Docker and Kubernetes. Proficiency in version control systems and CI/CD tools, particularly Git and GitLab. Excellent collaborative and problem-solving skills. Keen ability to work alongside leadership to drive development forward.
Posted 1 month ago
6.0 - 8.0 years
30 - 32 Lacs
Bengaluru
Work from Office
We are seeking an experienced ER Modeling Expert / Data Modeler to design, develop, and maintain conceptual, logical, and physical data models for enterprise applications and data warehouses. The ideal candidate should have a deep understanding of relational databases, normalization, data governance, and schema design while ensuring data integrity and scalability. Key Responsibilities: Design and develop Entity-Relationship (ER) models for databases, data warehouses, and data lakes. Create conceptual, logical, and physical data models using tools like Erwin, Visio, Lucidchart, or PowerDesigner. Define primary keys, foreign keys, relationships, cardinality, and constraints for optimal data integrity. Work closely with DBAs, data architects, and software developers to implement data models. Optimize database performance, indexing, and query tuning for relational databases. Define and enforce data governance, data quality, and master data management (MDM) standards. Develop and maintain metadata repositories, data dictionaries, and schema documentation. Ensure compliance with data security and privacy regulations (GDPR, HIPAA, etc.). Support ETL/ELT pipeline design to ensure smooth data flow between systems. Work with big data platforms (Snowflake, Databricks, Redshift, BigQuery, or Synapse) to support modern data architectures. Required Skills & Qualifications: 6+ years of experience in data modeling, database design, and ER modeling. Strong expertise in relational databases (SQL Server, Oracle, PostgreSQL, MySQL, etc.). Hands-on experience with data modeling tools (Erwin, PowerDesigner, DB Designer, Visio, or Lucidchart). Proficiency in SQL, indexing strategies, query performance tuning, and stored procedures. Deep understanding of normalization, denormalization, star schema, and snowflake schema. Experience with data governance, data quality, and metadata management. Strong knowledge of ETL processes, data pipelines, and data warehousing concepts. Familiarity with NoSQL databases (MongoDB, Cassandra, DynamoDB) and their modeling approaches. Ability to collaborate with cross-functional teams including data engineers, architects, and business analysts. Strong documentation and communication skills. Preferred Qualifications: Certifications in Data Management, Data Architecture, or Cloud Databases. Experience with cloud-based databases (AWS RDS, Azure SQL, Google Cloud Spanner, Snowflake). Knowledge of Graph Databases (Neo4j, Amazon Neptune) and hierarchical modeling.
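To make the PK/FK/relationship vocabulary above concrete, here is a minimal sketch of a one-to-many ER model expressed with SQLAlchemy's declarative ORM (the entity and column names are invented for illustration):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    customer_id = Column(Integer, primary_key=True)              # primary key
    name = Column(String(100), nullable=False)
    orders = relationship("Order", back_populates="customer")    # 1:N cardinality

class Order(Base):
    __tablename__ = "order_header"
    order_id = Column(Integer, primary_key=True)
    customer_id = Column(
        Integer, ForeignKey("customer.customer_id"), nullable=False  # foreign key
    )
    customer = relationship("Customer", back_populates="orders")

# Emitting the DDL against an in-memory database shows the physical model.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
```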
Posted 1 month ago
8.0 - 10.0 years
10 - 12 Lacs
Bengaluru
Work from Office
Senior Data Engineer (Databricks, PySpark, SQL, Cloud Data Platforms, Data Pipelines)

Job Summary: Synechron is seeking a highly skilled and experienced Data Engineer to join our innovative analytics team in Bangalore. The primary purpose of this role is to design, develop, and maintain scalable data pipelines and architectures that empower data-driven decision making and advanced analytics initiatives. As a critical contributor within our data ecosystem, you will enable the organization to harness large, complex datasets efficiently, supporting strategic business objectives and ensuring high standards of data quality, security, and performance. Your expertise will directly contribute to building robust, efficient, and secure data solutions that drive business value across multiple domains.

Required Software & Tools: Databricks platform (hands-on experience with Databricks notebooks, clusters, and workflows); PySpark (proficient in developing and optimizing Spark jobs; a short pipeline sketch follows this listing); SQL (advanced proficiency in writing and optimizing complex queries); data orchestration tools such as Apache Airflow or similar (experience in scheduling and managing data workflows); cloud data platforms (experience with AWS, Azure, or Google Cloud); data warehousing solutions (Snowflake highly preferred).

Preferred Software & Tools: Kafka or other streaming frameworks (e.g., Confluent, MQTT); CI/CD tools for data pipelines (e.g., Jenkins, GitLab CI); DevOps practices for data workflows. Programming languages: Python (expert level); familiarity with other languages like Java or Scala is advantageous.

Overall Responsibilities: Architect, develop, and maintain scalable, resilient data pipelines and architectures supporting business analytics, reporting, and data science use cases. Collaborate closely with data scientists, analysts, and cross-functional teams to gather requirements and deliver optimized data solutions aligned with organizational goals. Ensure data quality, consistency, and security across all data workflows, adhering to best practices and compliance standards. Optimize data processes for enhanced performance, reliability, and cost efficiency. Integrate data from multiple sources, including cloud data services and streaming platforms, ensuring seamless data flow and transformation. Lead efforts in performance tuning and troubleshooting data pipelines to resolve bottlenecks and improve throughput. Stay up to date with emerging data engineering technologies and contribute to continuous improvement initiatives within the team.

Technical Skills (By Category): Programming languages: essential: Python, SQL; preferred: Scala, Java. Databases/data management: essential: data modeling, ETL/ELT processes, data warehousing (Snowflake experience highly preferred); preferred: NoSQL databases, Hadoop ecosystem. Cloud technologies: essential: experience with cloud data services (AWS, Azure, GCP) and deployment of data pipelines in cloud environments; preferred: cloud-native data tools and architecture design. Frameworks and libraries: essential: PySpark, Spark SQL, Kafka, Airflow; preferred: streaming frameworks, TensorFlow (for data prep). Development tools and methodologies: essential: version control (Git), CI/CD pipelines, Agile methodologies; preferred: DevOps practices in data engineering, containerization (Docker, Kubernetes). Security protocols: familiarity with data security, encryption standards, and compliance best practices.

Experience: Minimum of 8 years of professional experience in data engineering or related roles. Proven track record of designing and deploying large-scale data pipelines using Databricks, PySpark, and SQL. Practical experience in data modeling, data warehousing, and ETL/ELT workflows. Experience working with cloud data platforms and streaming data frameworks such as Kafka or equivalent. Demonstrated ability to work with cross-functional teams, translating business needs into technical solutions. Experience with data orchestration and automation tools is highly valued. Prior experience in implementing CI/CD pipelines or DevOps practices for data workflows (preferred).

Day-to-Day Activities: Design, develop, and troubleshoot data pipelines for ingestion, transformation, and storage of large datasets. Collaborate with data scientists and analysts to understand data requirements and optimize existing pipelines. Automate data workflows and improve pipeline efficiency through performance tuning and best practices. Conduct data quality audits and ensure data security protocols are followed. Manage and monitor data workflows, troubleshoot failures, and implement fixes proactively. Contribute to documentation, code reviews, and knowledge sharing within the team. Stay informed of evolving data engineering tools, techniques, and industry best practices, incorporating them into daily work.

Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Relevant certifications such as Databricks Certified Data Engineer, AWS Certified Data Analytics, or equivalent (preferred). Continuous learning through courses, workshops, or industry conferences on data engineering and cloud technologies.

Professional Competencies: Strong analytical and problem-solving skills with a focus on scalable solutions. Excellent communication skills to effectively collaborate with technical and non-technical stakeholders. Ability to prioritize tasks, manage time effectively, and deliver within tight deadlines. Demonstrated leadership in guiding team members and driving project success. Adaptability to evolving technological landscapes and innovative thinking. Commitment to data privacy, security, and ethical handling of information.
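A minimal sketch of the kind of PySpark batch pipeline this role describes: read a raw dataset, derive a curated aggregate, and persist it partitioned. The paths and columns are hypothetical; on Databricks the same code would typically target Delta tables:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

# Raw layer: hypothetical orders dataset in Parquet.
orders = spark.read.parquet("/mnt/raw/orders")

# Curated layer: revenue and order counts per day and region.
daily = (
    orders.withColumn("order_date", F.to_date("created_at"))
          .groupBy("order_date", "region")
          .agg(F.sum("amount").alias("revenue"),
               F.count("*").alias("order_count"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/daily_revenue")
```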
Posted 1 month ago
4.0 - 6.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Project Role: Application Automation Engineer. Project Role Description: Apply innovative ideas to drive the automation of Delivery Analytics at the client level. Must have skills: Connected Vehicles. Good to have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full time education.

Summary: This is a hands-on, technical role where the candidate will design and implement TCU-related features for connected vehicles.

Roles & Responsibilities: 1. Lead the design and integration of TCU-related features for connected vehicles. 2. Collaborate on the implementation of vehicle communication with cloud (preferably AWS) services. 3. Drive the development of provisioning systems for new and existing vehicles, focusing on user-friendly self-enrolment, dealer enrolment, and subscription models; work to simplify and automate the process for end users and dealers. 4. Integrate systems with third-party APIs and services for data exchange. 5. Build and maintain microservices using Spring Boot, ensuring scalability and performance. 6. Ensure secure vehicle data communication, adhering to industry standards. 7. Work with cross-functional teams to deliver seamless solutions for connected vehicles. 8. Monitor and optimize system performance, ensuring low latency, high throughput, and minimal downtime for critical vehicle services.

Professional & Technical Skills: 1. Minimum 4-6 years of experience is required. 2. Expertise in telematics, vehicle provisioning, enrolment models, and connected vehicle ecosystems; experience in managing telematics control units (TCUs) and understanding of how data flows between vehicles and cloud systems. 3. Hands-on experience in designing, implementing, and managing cloud architectures and solutions (preferably on AWS) for connected vehicles. 4. Extensive experience with Java, Spring Boot, and microservices architecture; proficiency in creating and maintaining microservices-based applications, ensuring scalability, security, and performance. 5. Experience in integrating backend systems with third-party services through APIs, including RESTful APIs and WebSocket communication. 6. Strong experience with SQL/NoSQL databases (e.g., MySQL, MongoDB, Cassandra) and data integration strategies for large-scale systems. 7. Experience with CI/CD pipelines, Docker, and Kubernetes. 8. Knowledge of telematics protocols (e.g., MQTT, CAN bus). 9. Experience with vehicle lifecycle management and security standards (ISO 26262, SAE J3061). 10. Previous experience working in the automotive industry or with OEMs (Original Equipment Manufacturers) is a plus.

Additional Information: 1. The candidate should have a minimum of 4 years of experience in cloud (preferably AWS) services. 2. This position is based at our Bengaluru office. 3. A 15 years full time education is required (bachelor's degree in Computer Science, Engineering, or a related field). Qualification: 15 years full time education.
Posted 1 month ago
6.0 - 8.0 years
8 - 13 Lacs
Bengaluru
Work from Office
6+ years of experience in Agile software development. Advanced knowledge of Spring, Hibernate, Spring Data JPA, and Java 8 features. Advanced knowledge of relational and NoSQL databases. Experience creating and consuming REST and SOAP APIs. Experience with microservices and container technologies. Experience with build and deployment technologies. Ability to work independently and with a team. Excellent organizational and leadership skills.
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Coimbatore
Work from Office
Project Role: Software Development Engineer. Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work. Must have skills: Databricks Unified Data Analytics Platform. Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full time education.

Summary: As a Software Development Engineer, you will analyze, design, code, and test multiple components of application code across one or more clients. You will perform maintenance, enhancements, and/or development work in a dynamic environment.

Roles & Responsibilities: - Expected to be an SME - Collaborate and manage the team to perform - Responsible for team decisions - Engage with multiple teams and contribute on key decisions - Provide solutions to problems for their immediate team and across multiple teams - Lead the team in implementing innovative solutions - Conduct code reviews and ensure code quality - Mentor junior team members for their professional growth

Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform - Strong understanding of cloud-based data architecture - Experience with big data technologies such as Hadoop and Spark - Hands-on experience in developing scalable data pipelines - Proficient in SQL and NoSQL databases

Additional Information: - The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform - This position is based at our Coimbatore office - A 15 years full-time education is required. Qualification: 15 years full time education.
Posted 1 month ago
5.0 - 8.0 years
8 - 13 Lacs
Noida
Work from Office
Design, develop, and implement Java-based applications with a focus on database interactions. Create and optimize database schemas, queries, and stored procedures. Collaborate with cross-functional teams to define and understand application requirements. Ensure data integrity and security through effective database management practices. Troubleshoot and resolve database-related issues and performance bottlenecks. Conduct code reviews and provide constructive feedback to peers. Stay updated with industry trends and best practices in Java development and database management. Qualifications: 5-8 years of experience in Java development with a focus on database integration. Proficiency in the Java programming language and related frameworks (e.g., Spring, Hibernate). Strong experience with relational databases such as MySQL, PostgreSQL, or Oracle. Knowledge of SQL and database design principles. Experience with database performance tuning and optimization. Familiarity with version control systems (e.g., Git). Excellent problem-solving skills and attention to detail. Strong communication and teamwork abilities. Preferred Skills: Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus. Knowledge of cloud-based database solutions (e.g., AWS RDS, Azure SQL). Familiarity with Agile/Scrum methodologies. Mandatory Competencies: Fundamental Technical Skills - Java; Java - SQL; Database - PostgreSQL; Database - MySQL; Beh - Communication and collaboration; Architecture - Microservice; Fundamental Technical Skills - Spring Framework/Hibernate/JUnit etc.; Fundamental Technical Skills - Multithreading; Java Others - Kafka.
Posted 1 month ago
7.0 - 12.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Degree or postgraduate in Computer Science or a related field (or equivalent industry experience). Minimum 4+ years of development and design experience in Informatica Big Data Management. Extensive knowledge of Oozie scheduling, HQL, Hive, HDFS (including usage of storage controllers), and data partitioning. Extensive experience working with SQL and NoSQL databases. Linux OS configuration and use, including shell scripting. Good hands-on experience with design patterns and their implementation. Well versed with Agile, DevOps, and CI/CD principles (GitHub, Jenkins, etc.), and actively involved in solving and troubleshooting issues in a distributed services ecosystem. Familiar with distributed services resiliency and monitoring in a production environment. Experience in designing, building, testing, and implementing security systems, including identifying security design gaps in existing and proposed architectures and recommending changes or enhancements. Responsible for adhering to established policies, following best practices, developing and possessing an in-depth understanding of exploits and vulnerabilities, and resolving issues by taking the appropriate corrective action. Knowledge of security controls for designing source and data transfers, including CRON, ETLs, and JDBC/ODBC scripts. Understand the basics of networking, including DNS, proxy, ACL, policy, and troubleshooting. High-level knowledge of compliance and regulatory requirements for data, including but not limited to encryption, anonymization, data integrity, and policy control features in large-scale infrastructures. Understand data sensitivity in terms of logging, events, and in-memory data storage, such as no card numbers or personally identifiable data in logs. Implements wrapper solutions for new/existing components with no or minimal security controls to ensure compliance with bank standards. Experience in Agile methodology. Ensure quality of technical and application architecture and design of systems across the organization. Effectively research and benchmark technology against other best-in-class technologies. Experience in banking, financial services, and fintech in an enterprise environment preferred. Able to influence multiple teams on technical considerations, increasing their productivity and effectiveness by sharing deep knowledge and experience. Self-motivated self-starter with the ability to own and drive things without supervision; works collaboratively with teams across the organization. Excellent soft and interpersonal skills to interact with the team and present ideas; good listening skills and clear speech in front of the team, stakeholders, and management. Carries a positive attitude towards work, establishes effective team relations, and builds a climate of trust within the team. Should be enthusiastic and passionate and create a motivating environment for the team.
Posted 1 month ago
4.0 - 9.0 years
8 - 13 Lacs
Hyderabad
Work from Office
We are looking for a PySpark solutions developer and data engineer who can design and build solutions for one of our Fortune 500 client programs, which aims to build data standardization and curation capabilities on a Hadoop cluster. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights, and integrate with the customer's critical systems. Key Responsibilities: Ability to design, build, and unit test applications on the Spark framework in Python. Build PySpark-based applications for both batch and streaming requirements, which will require in-depth knowledge of the majority of Hadoop and NoSQL databases as well. Develop and execute data pipeline testing processes and validate business rules and policies. Optimize performance of the built Spark applications in Hadoop using configurations around Spark Context, Spark-SQL, DataFrames, and Pair RDDs. Optimize performance for data access requirements by choosing the appropriate native Hadoop file formats (Avro, Parquet, ORC, etc.) and compression codecs. Build integrated solutions leveraging Unix shell scripting, RDBMS, Hive, the HDFS file system, HDFS file types, and HDFS compression codecs. Build data tokenization libraries and integrate them with Hive and Spark for column-level obfuscation. Experience in processing large amounts of structured and unstructured data, including integrating data from multiple sources. Create and maintain an integration and regression testing framework on Jenkins integrated with Bitbucket and/or Git repositories. Participate in the agile development process, and document and communicate issues and bugs relative to data standards in scrum meetings. Work collaboratively with onsite and offshore teams. Develop and review technical documentation for artifacts delivered. Ability to solve complex data-driven scenarios and triage defects and production issues. Ability to learn-unlearn-relearn concepts with an open and analytical mindset. Participate in code release and production deployment. Challenge and inspire team members to achieve business results in a fast-paced and quickly changing environment. Preferred Qualifications: BE/B.Tech/B.Sc. in Computer Science or Statistics from an accredited college or university. Minimum 3 years of extensive experience in the design, build, and deployment of PySpark-based applications. Expertise in handling complex, large-scale Big Data environments (preferably 20 TB+). Minimum 3 years of experience in the following: Hive, YARN, HDFS. Hands-on experience writing complex SQL queries and exporting and importing large amounts of data using utilities. Ability to build abstracted, modularized, reusable code components. Prior experience with ETL tools, preferably Informatica PowerCenter, is advantageous. Able to quickly adapt and learn. Able to jump into an ambiguous situation and take the lead on resolution. Able to communicate and coordinate across various teams. Comfortable tackling new challenges and new ways of working. Ready to move from traditional methods and adapt to agile ones. Comfortable challenging your peers and leadership team. Can prove yourself quickly and decisively. Excellent communication skills and good customer centricity. Strong target and high solution orientation.
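As a sketch of the Spark tuning and file-format choices mentioned above (shuffle sizing, columnar formats, compression codecs), with invented paths and settings that would need validation against the actual cluster:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuned-conversion-job")
    .config("spark.sql.shuffle.partitions", "200")   # right-size shuffles for the data volume
    .config("spark.sql.adaptive.enabled", "true")    # let AQE coalesce skewed partitions
    .getOrCreate()
)

# Read a columnar ORC source and keep only the columns downstream consumers need.
events = spark.read.orc("/data/events_orc")          # hypothetical path
slim = events.select("event_id", "event_ts", "payload")

# Persist as snappy-compressed Parquet, a common speed/size trade-off.
slim.write.option("compression", "snappy").mode("overwrite").parquet("/data/events_parquet")
```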
Posted 1 month ago
9.0 - 14.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Kafka Data Engineer: a Data Engineer to build and manage data pipelines that support batch and streaming data solutions. The role requires expertise in creating seamless data flows across platforms like Data Lake/Lakehouse in Cloudera, Azure Databricks, and Kafka, for both batch and stream data pipelines. Responsibilities: Strong experience in developing, testing, and maintaining data pipelines (batch and stream) using Cloudera, Spark, Kafka, and Azure services like ADF, Cosmos DB, Databricks, and NoSQL/Mongo DB. Strong programming skills in Spark, Python or Scala, and SQL. Optimize data pipelines to improve speed, performance, and reliability, ensuring that data is available for data consumers as required. Create ETL pipelines for downstream consumers by transforming data as per business logic. Work closely with Data Architects and Data Analysts to align data solutions with business needs and ensure the accuracy and accessibility of data. Implement data validation checks and error handling processes to maintain high data quality and consistency across data pipelines. Strong analytical and problem-solving skills, with a focus on optimizing data flows and addressing impacts in the data pipeline. Qualifications: 8+ years of IT experience with at least 5+ years in data engineering and cloud-based data platforms. Strong experience with Cloudera or any data lake, Confluent/Apache Kafka, and Azure data services (ADF, Databricks, Cosmos DB). Deep knowledge of NoSQL databases (Cosmos DB, MongoDB) and data modeling for performance and scalability. Proven expertise in designing and implementing batch and streaming data pipelines using Databricks, Spark, or Kafka. Experience in creating scalable, reliable, and high-performance data solutions with robust data governance policies. Strong collaboration skills to work with stakeholders, mentor junior data engineers, and translate business needs into actionable solutions. Bachelor's or master's degree in computer science, IT, or a related field.
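A minimal sketch of the streaming half of this role: a PySpark Structured Streaming job that reads a Kafka topic and lands it in the lake. The broker, topic, and paths are placeholders, and the job needs the spark-sql-kafka package on the classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

# Stream source: a hypothetical Kafka topic of sensor events.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
    .option("subscribe", "sensor-events")               # placeholder topic
    .load()
)

# Kafka delivers key/value as bytes; cast to strings for downstream parsing.
events = raw.select(
    F.col("key").cast("string").alias("device_id"),
    F.col("value").cast("string").alias("payload"),
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/lake/sensor_events")                     # placeholder sink
    .option("checkpointLocation", "/lake/_chk/sensor_events")  # required for recovery
    .outputMode("append")
    .start()
)
query.awaitTermination()
```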
Posted 1 month ago
7.0 - 12.0 years
5 - 9 Lacs
Hyderabad
Work from Office
4 years of hands-on experience in Java full stack development (Java, Spring Boot). Design, develop, test, and deploy scalable and resilient microservices using Java and Spring Boot. Collaborate with cross-functional teams to define, design, and ship new features. Work on the entire software development lifecycle, from concept and design to testing and deployment. Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability. Integrate microservices with Kafka for real-time data streaming and event-driven architecture. Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance. Keep up to date with industry trends and advancements, incorporating best practices into our development processes. Should be a Java full stack developer. Bachelor's or Master's degree in Computer Science or a related field. Proficiency in Spring Boot and other Spring Framework components. Extensive experience in designing and developing RESTful APIs. Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS. Experience with Kafka for building event-driven architectures. Strong database skills, including SQL and NoSQL databases. Familiarity with containerization and orchestration tools (Docker, Kubernetes). Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills.
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Role: Python Django Developer. Experience: 4 to 7 years. Work Mode: Hybrid. Work Timings: 1:30 pm IST to 10:30 pm IST. Location: Chennai & Hyderabad. Primary Skills: Python, communication, Django, AWS services.

JD: Experience with Django REST Framework (DRF). Knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with Docker and containerization. Experience with automated testing frameworks. Familiarity with CI/CD tools and pipelines. Assist in developing, testing, and maintaining web applications using Django and Python that are deployed on the AWS cloud platform. Take KT from the application team on the code and understand BMO coding standards. Work on application vulnerability fixes based on the vulnerability report from the BMO team. Work with RESTful APIs and integrate third-party services. Develop and manage database models, migrations, and queries using the Django ORM. Collaborate with front-end developers to integrate user-facing elements with server-side logic. Write clean, efficient, and reusable code. Participate in code reviews and follow best practices for software development. Troubleshoot, debug, and optimize applications for performance and security. Write unit and integration tests to ensure code reliability. Contribute to deployment processes and continuous integration pipelines. Stay updated with the latest trends and best practices in Django and web development.
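To illustrate the Django ORM plus DRF work this posting describes, a minimal model-serializer-viewset sketch; the Invoice model is invented, and the snippets assume an already configured Django project:

```python
# models.py -- a hypothetical domain model managed by the Django ORM
from django.db import models

class Invoice(models.Model):
    number = models.CharField(max_length=32, unique=True)
    amount = models.DecimalField(max_digits=12, decimal_places=2)
    created_at = models.DateTimeField(auto_now_add=True)

# serializers.py -- Django REST Framework serializer for the model
from rest_framework import serializers

class InvoiceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Invoice
        fields = ["id", "number", "amount", "created_at"]

# views.py -- a full CRUD endpoint in a few lines via a ModelViewSet
from rest_framework import viewsets

class InvoiceViewSet(viewsets.ModelViewSet):
    queryset = Invoice.objects.all()
    serializer_class = InvoiceSerializer
```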
Posted 1 month ago
7.0 - 12.0 years
7 - 11 Lacs
Hyderabad
Work from Office
We are seeking an experienced Senior Python Developer to join our dynamic engineering team. The ideal candidate will have strong expertise in Python development, software architecture, and best practices for scalable, maintainable, and high-performance applications. You will lead technical design and development efforts, mentor junior developers, and collaborate closely with cross-functional teams to deliver innovative solutions. Key Responsibilities: Design, develop, and maintain scalable and efficient Python applications and services. Lead architecture discussions and provide technical guidance to the development team. Write clean, reusable, and well-documented code following best practices. Collaborate with product managers, designers, and other engineers to define requirements and deliver features. Implement automated testing, CI/CD pipelines, and deployment processes. Troubleshoot and debug complex software issues. Mentor and support junior developers and conduct code reviews. Stay updated with the latest industry trends and technologies to continuously improve development processes. Ensure code quality, security, and performance optimization. Required Qualifications: 5+ years of professional experience developing applications in Python. Strong knowledge of Python frameworks such as Django, Flask, or FastAPI. Experience with RESTful API design and development. Solid understanding of object-oriented programming and software design patterns. Proficient in working with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB). Experience with version control systems like Git. Familiarity with containerization tools such as Docker and orchestration with Kubernetes is a plus. Strong understanding of CI/CD tools and automated testing frameworks. Excellent problem-solving, debugging, and analytical skills. Strong communication skills and ability to work in a collaborative environment.
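As a small sketch of the FastAPI flavour of RESTful API design mentioned above (the in-memory store and Item model are purely illustrative):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

class Item(BaseModel):
    id: int
    name: str
    price: float

app = FastAPI()
_items: dict[int, Item] = {}   # illustrative in-memory store, not production state

@app.post("/items", status_code=201)
def create_item(item: Item) -> Item:
    if item.id in _items:
        raise HTTPException(status_code=409, detail="duplicate id")
    _items[item.id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="not found")
    return _items[item_id]

# Run locally with: uvicorn main:app --reload
```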
Posted 1 month ago
4.0 - 9.0 years
4 - 8 Lacs
Gurugram
Work from Office
Data Engineer. Location: PAN India. Work Mode: Hybrid. Work Timing: 2 pm to 11 pm. Primary Skill: Data Engineer.

Responsibilities: Lead the architectural design and development of a scalable, reliable, and flexible metadata-driven data ingestion and extraction framework on AWS using Python/PySpark. Design and implement a customizable data processing framework using Python/PySpark; this framework should be capable of handling diverse scenarios and evolving data processing requirements. Implement data pipelines for data ingestion, transformation, and extraction leveraging AWS cloud services. Seamlessly integrate a variety of AWS services, including S3, Glue, Kafka, Lambda, SQL, SNS, Athena, EC2, RDS (Oracle, Postgres, MySQL), and AWS Crawler, to construct a highly scalable and reliable data ingestion and extraction pipeline. Facilitate configuration and extensibility of the framework to adapt to evolving data needs and processing scenarios. Develop and maintain rigorous data quality checks and validation processes to safeguard the integrity of ingested data. Implement robust error handling, logging, monitoring, and alerting mechanisms to ensure the reliability of the entire data pipeline.

Qualifications. Must have: Over 6 years of hands-on experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark. Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres). 4+ years of experience working with both relational and non-relational/NoSQL databases is required. Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch; strong working experience in Redshift is required along with other SQL database experience. Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture, and a complete understanding of building an end-to-end data pipeline. Nice to have: Strong understanding of Kinesis, Kafka, and CDK. A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling. Experience in Node.js and CDK. Experience with Kafka and ECS.
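A minimal sketch of the serverless glue this role describes: an AWS Lambda handler (Python/boto3) that fans S3 object-created events out to SQS. The queue URL is a placeholder:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/ingest-queue"  # placeholder

def handler(event, context):
    """Triggered by S3 ObjectCreated notifications; forwards object keys to SQS."""
    records = event.get("Records", [])
    for record in records:
        message = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(message))
    return {"forwarded": len(records)}
```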
Posted 1 month ago
5.0 - 10.0 years
10 - 14 Lacs
Chennai
Work from Office
As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: 5+ years (mid-level) or 8+ years (senior) of experience developing large-scale web applications using GoLang. Significant microservices architecture and development skills. Significant knowledge of SQL and NoSQL databases (for example SQL Server, Postgres, Cassandra, Yugabyte). Preferred experience in messaging architectures and development (Kafka or equivalent). Experience with Agile methodologies and principles. Proven experience working with Docker or similar technologies, Git, CI/CD, and writing unit/integration tests. Strong analytical problem-solving skills and excellent written and verbal communication skills. Knowledge of monitoring tools and configuring dashboards like Grafana or Datadog. Preferably Azure cloud working experience, or any other cloud working knowledge. Preferred technical and professional experience: None.
Posted 1 month ago
5.0 - 10.0 years
9 - 12 Lacs
Hyderabad
Work from Office
Greetings from IDESLABS. We are looking for a Cloud Database Engineer for contract and FTE roles.

Job details. Skill: Cloud Database Engineer. Experience: 5+ years. Location: Pan India. Job description below, and please share the profiles at

Primary skills: HANA, Hadoop, Redis, Kafka, InfluxDB, Postgres, Cassandra, and MySQL; Terraform and Ansible; Python.

Cloud Database Engineer Primary Responsibilities: The Cloud Database Engineer provides 24x7 operational support and designs and creates automation for the HANA, Hadoop, Redis, Kafka, InfluxDB, Postgres, Cassandra, and MySQL instances supporting the SAP Procurement and Business Network cloud platform in public cloud, SAP Business Technology Platform, and SAP-managed data centers. Data Dynamo: Design, deploy, and maintain highly available, scalable, and secure cloud databases to support real-time data processing and fuel data-driven insights. Support Superhero: Provide 24x7 support, monitoring, and proactive maintenance for our critical databases, ensuring uptime and availability around the clock. Automation Alchemist: Leverage Terraform and Ansible to automate database provisioning, configuration, and scaling, reducing manual effort and boosting efficiency. Python Wizardry: Harness the power of Python to develop custom scripts, automation tools, and data pipelines that streamline database operations and enhance performance. Git Guardian: Collaborate with development teams to version control database changes effectively using Git, ensuring seamless integration and traceability. Performance Maestro: Dive deep into data performance analysis, fine-tuning queries, optimizing database configurations, and collaborating with developers to achieve peak efficiency. Security Sentinel: Implement robust access controls, encryption, and monitoring to safeguard our data assets, maintaining the highest levels of data security. Collaboration Virtuoso: Work hand-in-hand with cross-functional teams, offering database expertise and driving successful integration with applications and analytics platforms.

Qualifications: Database Mastery: Extensive experience managing databases, including SAP HANA, Hadoop, Redis, Cassandra, MySQL, and InfluxDB, with a strong understanding of relational and NoSQL databases. Cloud Expertise: Proven proficiency in cloud database management on major platforms (AWS, Azure, or GCP), with the ability to optimize database performance in a cloud environment. Automation Wizardry: Hands-on experience with infrastructure automation tools like Terraform and configuration management tools like Ansible, coupled with Python proficiency for automation and Git for source code management. Innovation Mindset: A track record of innovation, staying abreast of industry trends, and a passion for exploring and integrating new technologies. Problem-Solving Ninja: Demonstrated ability to identify complex technical issues, devise creative solutions, and implement effective problem-solving strategies. Collaborative Spirit: Excellent communication skills and the ability to work collaboratively with cross-functional teams in an agile environment. Bachelor's Degree: A Bachelor's degree in Computer Science, Engineering, or a related field; advanced degrees are a plus. 3+ years of experience in database administration.
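To illustrate the Python automation side of this role, a small health-check sketch against Postgres using psycopg2; the DSN is a placeholder, and the queried views are standard Postgres catalog functions:

```python
import psycopg2  # pip install psycopg2-binary

DSN = "host=db.internal dbname=appdb user=monitor password=secret"  # placeholder DSN

def check_postgres(dsn: str) -> dict:
    """Return a couple of basic health indicators for alerting pipelines."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM pg_stat_activity;")       # session count
            (sessions,) = cur.fetchone()
            cur.execute("SELECT pg_database_size(current_database());")  # size in bytes
            (size_bytes,) = cur.fetchone()
    return {"sessions": sessions, "db_size_bytes": size_bytes}

if __name__ == "__main__":
    print(check_postgres(DSN))
```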
Posted 1 month ago
7.0 - 12.0 years
6 - 10 Lacs
Noida, Bengaluru
Work from Office
About the Role: Grade Level (for internal use): 10. Responsibilities: To work closely with various stakeholders to collect, clean, model, and visualise datasets. To create data-driven insights by researching, designing, and implementing ML models to deliver insights and implement action-oriented solutions to complex business problems. To drive ground-breaking ML technology within the Modelling and Data Science team. To extract hidden value insights and enrich the accuracy of the datasets. To leverage technology and automate workflows, creating modernized operational processes aligned with the team strategy. To understand, implement, manage, and maintain analytical solutions and techniques independently. To collaborate and coordinate with data, content, and modelling teams and provide analytical assistance for various commodity datasets. To drive and maintain high-quality processes, delivering projects in collaborative Agile team environments.

Requirements: 7+ years of programming experience, particularly in Python. 4+ years of experience working with SQL or NoSQL databases. 1+ years of experience working with PySpark. University degree in Computer Science, Engineering, Mathematics, or related disciplines. Strong understanding of big data technologies such as Hadoop, Spark, or Kafka. Demonstrated ability to design and implement end-to-end scalable and performant data pipelines. Experience with workflow management platforms like Airflow (a brief DAG sketch follows this listing). Strong analytical and problem-solving skills. Ability to collaborate and communicate effectively with both technical and non-technical stakeholders. Experience building solutions and working in an Agile environment. Experience working with Git or other source control tools. Strong understanding of Object-Oriented Programming (OOP) principles and design patterns. Knowledge of clean code practices and the ability to write well-documented, modular, and reusable code. Strong focus on performance optimization and writing efficient, scalable code.

Nice to have: Experience working with oil, gas, and energy markets. Experience working with BI visualization applications (e.g., Tableau, Power BI). Understanding of cloud-based services, preferably AWS. Experience working with unified analytics platforms like Databricks. Experience with deep learning and related toolkits: TensorFlow, PyTorch, Keras, etc.

About S&P Global Commodity Insights: At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the energy transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture, and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights.

What's In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People. Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: health care coverage designed for the mind and body. Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
IFTECH202.1 - Middle Professional Tier I (EEO Job Group). Location: Bengaluru, Noida (Uttar Pradesh), Hyderabad
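As a sketch of the Airflow workflow management named in the requirements above, a minimal two-task DAG; the DAG id and callables are invented, and the schedule parameter assumes Airflow 2.4+:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_prices():
    print("pull commodity prices")       # placeholder for the real extract step

def build_insights():
    print("run models on the new data")  # placeholder for the modelling step

with DAG(
    dag_id="commodity_insights_daily",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_prices)
    model = PythonOperator(task_id="model", python_callable=build_insights)
    extract >> model                     # model runs only after extract succeeds
```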
Posted 1 month ago
4.0 - 7.0 years
6 - 9 Lacs
Bengaluru
Work from Office
About The Role: Backend Lead (5+ Years)

Who Are We
Syook is an established startup headquartered in Bengaluru, working in the B2B space. We are currently on a path of high growth and productivity, and great culture building (because we know the two go hand in hand)! We are looking for folks who will ask questions like why/why not, what can I do to make this better, how can I help the company, and most importantly, what can the company do to help you.
Our flagship product, Syook InSite, allows businesses to visualize their operations, providing information that easily translates into measurable impact on the bottom line. InSite is an Industrial Internet-of-Things (IIoT) solution that uses Bluetooth Low Energy (BLE) beacons to generate highly accurate location data for all your resources (both assets and people), at much lower cost than comparable technologies. The result: quick RoI and improved operational performance. It's Industry 4.0, ready for deployment!

Role Overview
First and foremost, we're looking for people who are excited about what we're doing. You don't need to know anything about IoT, but you should be excited to learn. We're building in an entirely new space, so you'll be able to use creativity to help us solve problems and delight our customers.
Our product stack is MERN (React, Node + Express, MongoDB), with a few services also written in Java and Go, and we use React Native on mobile.
We're looking for someone who loves to solve problems and enjoys building algorithms that identify all possible scenarios. You must enjoy breaking a problem statement down into smaller chunks and have a knack for designing scalable systems from modular pieces.

A day in your role will involve any of the following (or a combination):
Develop APIs for new features that will be consumed by the frontend (web and mobile).
Develop APIs for integrations into 3rd-party ERP and HRMS systems such as SAP.
Build new services or add functionality to existing systems and services, such as IoT data parsers and ingestors.
Build modular sub-systems that can be reused to develop a scalable system.
Optimize the application backend for speed and scalability.
Ship bug-free code using TDD.
Work towards greater stability and scalability of the backend.
Build and maintain CI/CD pipelines for application deployments.
Be part of product planning with the product manager.
Take the lead in deciding how to build what needs to be built for scale, and ensure the technical feasibility of each product feature.
Understand the requirements and give adequate time estimates for features.
Coordinate with QA for every feature and make sure releases are bug-free.
Mentor junior developers on the right way to code and contribute.
Lead a team of 5 developers, providing guidance and mentorship for technical growth and best practices.
This list is not exhaustive at all, and you'll have a lot of autonomy over your work.
This is a full-time position based out of Bengaluru. During the interview, you'll be able to share what you're most interested in.

Why You Might Be Excited About Us
We're working to solve a massive global problem and help organizations be more competitive. We love solving problems using technology and are bridging a massive gap in the operations excellence domain.
We're small, so you'll be able to contribute efficiently and without bureaucracy. You'll quickly have responsibility over big areas of our product.
Our team has a wide range of experiences (oilfield, telecom, a psychology PhD, etc.) and we're excited to learn from you, too.
You'll be able to work independently and set your own schedule. We don't micromanage and will help you do great work. We trust our people and believe that each person here puts their best foot forward.
We'll mould your role to shape a career you're excited about. We care a ton about your satisfaction and job happiness, and will help prep you for whatever you're looking for in the future.
We work hard and also highly value a balanced work/life. We care about family and your own personal development, and don't expect you to always be engaged with work.

Why You Might Not Be Excited About Us
We're small (35+ people in the company now), so if you like more established companies, it's not (yet) the right time. You'll help build our company's culture.
Since we're an early-stage startup, projects and priorities may shift.
Our customers love us, and there's a lot we can improve. It's a great place to be, but it means there's some jank. (Nothing too scary!)
We can't (yet) provide constant close mentorship for junior developers. As we grow, we'll get a lot better at this.
Since you'll have a lot of responsibility and creativity over projects, they may not be defined perfectly initially. You'll be expected to bring your own experience and perspective to help us do the right things, and raise flags if you think we should do things differently.

About You
None of these are requirements, but they do describe the kinds of people we think would be most effective at Syook right now.
You love thinking broadly about problems and thinking creatively about how to solve them efficiently.
You're happy to try things out to validate new features, and to move on if they no longer solve a problem.
You're excited for a front-row seat into a fast-growing, early-stage company. Things will change a lot!
You enjoy thinking through trade-offs, with mindfulness of both short-term needs and our long-term direction.
You're happy writing documentation so that others can ramp up super easily and you're never a single source of failure. We're a bit too small to have silos.
You are driven, and care about doing a good job and improving your craft.
You have a growth mindset, can keep up with the latest technology changes and trends, and suggest enhancements based on these.
Most importantly, you're the kind of person who is friendly, approachable, ready to help others, and personally driven to put your best foot forward.

These are some of the things we would like you to have to be able to contribute effectively in this kind of a position:
You are fluent working with server-side languages and frameworks, in particular NodeJS; proficiency in any other language (e.g. Go, Python, Java) is also fine, as long as you can pick up a new language and contribute.
You are fluent in using a SQL database, preferably Postgres, and at least one NoSQL database, preferably MongoDB, but feel free to surprise us!
You have experience in developing backend apps and have put them into production.
You can write clean, modular code in either an object-oriented or a functional style.
You are comfortable with Test Driven Development.
You have working knowledge of cloud services (AWS and Azure) such as S3, CloudFront, IAM, etc., and of DevOps, and can set up and manage CI/CD pipelines (GitLab or GitHub) for application deployment.
You have shipped code to production recently, and regularly.
You are fluent in using the tools of the trade: testing, infrastructure setup, code pipelines, editors, Git, the command line, Slack, Jira.
You can lead a highly driven team, galvanize Syook Engineering in the tech community, and position the engineering team for growth.

Apart from the above, it would be a plus if you also have:
Experience with Docker & Kubernetes.
Experience with Kafka, RabbitMQ, or other pub/sub and queue management systems.
Open source contributions.

Our Current Development Practices
Since we're an early-stage startup, we constantly have to ask "what gets the most value, cheaply, to validate our assumptions?" We build some things to last a long time, and others as prototypes.
We use linting, e2e testing, CI/CD, observability logging, and production probers. We've documented both our web and mobile apps so that you should be able to get started easily (and if you need help, we'll absolutely improve our docs) and contribute from your first day.
We recognize the value of maintainability and of keeping our developer experience nimble. Our sprints are two weeks long, and we push releases to production on this schedule.
You'll help push us to be our best, and we're excited for the recommendations and insights you'll have as you join. You'll be an owner and contribute towards how we work.

Joining Our Team: Interview Process
We want you at your best, and won't be giving you gotcha-style algorithm questions. We want to get to know you, hear about what you're interested in, and learn about what you hope to do in the future.

Meet us and learn about Syook:
You'll first talk to Sarlaksha or someone from the People Services team (over phone or video) and won't need to prepare anything in advance. The goal of this conversation is to get to know you and mutually explore whether we might be a good fit for each other. You'll learn more about Syook and have a chance to ask any questions about our company, team culture, and product.
If we're both excited to continue, we'll send along a bunch of information about the company that you can go through in your own time. You'll then have the opportunity to chat with other people in our company to learn more about them and the company.

Technical conversation:
We'll have another conversation to talk in depth about your technical experience. We'll talk about frameworks you've used, how you make technical decisions, the types of problems you like to solve, etc. You won't need to prepare anything in advance.
This is primarily used to get a better feel for your experience, how you work, and where you may fit in. It'll be used to design the rest of the interview process.
Through this and the next steps, you'll meet more people in the company so we can get to know each other.

Technical challenge:
Everyone has different strengths, and we want you to do your best. Our goal is for you to clearly demonstrate your technical aptitude, and we're open to accommodating what would work best for you. We can mutually choose between several different options:
A live pair-coding session where we'll work through some problems.
Working through similar problems independently as a take-home challenge.
Presenting any previous work you've done (for example: open source, a side project, or even another interview you did).

Technical discussions:
If you clear the technical challenge, we will have a few rounds of technical discussions where we interact with you to understand your craft in depth.
These discussions will be with one of the engineers at Syook whom you will end up working with, and the final discussion will be with our CTO (Aman Agarwal). We will try to see how you can augment our engineering culture, and we'll discuss the bigger picture too. You will also get to understand what your role is all about and how you can grow with us. Feel free to ask for feedback and any other questions you may have about the company, and we will be happy to share.

People & Culture fit round:
If you reach this stage, it automatically means that we're convinced of your technical skills. However, that's never enough for us. We want to ensure that you will feel comfortable working with us and that we can give you an environment where you can be your productive best. We will use this discussion to understand what you bring to the table apart from your technical skills, in terms of initiative, personality, and a certain entrepreneurial mindset.

Reference conversations:
We will talk to a few people you've worked with before to learn more about how we can best work with you. We expect to hear great things, so this is primarily so we can work with you as effectively as possible.

Getting Started
If you're excited to learn more, you can mail us directly (ajay@syook.com). If we think you might be a fit, we'll respond really fast. Please let us know if you have timing constraints; regardless, we'll try our best to respect your time throughout the process.
You can learn more about what we're up to on our company's website, where we write about how IoT is a game-changer for building smart factories and workspaces.
If you think you're a good fit here, then we should get talking! If not, all the best with your job search.
(ref:hirist.tech)
Posted 1 month ago
5.0 - 10.0 years
7 - 11 Lacs
Bengaluru
Work from Office
We are seeking a skilled Informatica BDM Engineer with a strong background in Big Data Management to join our team. The ideal candidate will be responsible for designing, developing, and implementing Informatica solutions to manage and analyze large volumes of data effectively. You will play a crucial role in ensuring data integrity, security, and compliance within our organization.

Overall Responsibilities
Design and Development: Design, develop, and implement solutions using Informatica Big Data Management. Ensure quality and performance of technical and application architecture and design across the organization.
Data Management: Work extensively with Oozie scheduling, HQL, Hive, HDFS, and data partitioning to manage large datasets. Collaborate with teams to ensure effective data integration and transformation processes using SQL and NoSQL databases.
Security Implementation: Design and implement security systems, identifying gaps in existing architectures and recommending enhancements. Adhere to established policies and best practices regarding data security and compliance.
Monitoring and Troubleshooting: Actively monitor distributed services and troubleshoot issues in production environments. Implement resiliency and monitoring solutions to ensure continuous service availability.
Agile and DevOps Practices: Participate in Agile methodology, ensuring timely delivery of projects while adhering to CI/CD principles using tools like GitHub and Jenkins.
Collaboration and Influence: Work collaboratively with multiple teams to share knowledge and improve productivity. Effectively research and benchmark technologies against best-in-class solutions.

Technical Skills
Informatica BDM: Minimum 5 years of development and design experience.
Data Technologies: Extensive knowledge of Oozie, HQL, Hive, HDFS, and data partitioning.
Databases: Proficient in SQL and NoSQL databases.
Operating Systems: Strong Linux OS configuration skills, including shell scripting.
Security and Compliance: Knowledge of designing security controls for data transfers and ETL processes. Understanding of compliance and regulatory requirements, including encryption and data integrity.
Networking: Basic understanding of networking concepts including DNS, proxy, ACL, and policy troubleshooting.
DevOps & Agile: Familiarity with Agile methodologies, CI/CD practices, and tools (GitHub, Jenkins). Experience with distributed services resiliency and monitoring.

Experience
Minimum 5 years of experience in Informatica Big Data Management.
Experience in the Banking, Financial, and Fintech sectors preferred.
Proven ability to implement design patterns and security measures in large-scale infrastructures.

Qualifications
Education: Bachelor's or Master's degree in Computer Science or a related field (or equivalent industry experience).

Soft Skills
Excellent interpersonal and communication skills to effectively present ideas to teams and stakeholders.
Strong listening skills, with the ability to speak clearly and confidently in front of management and peers.
A positive attitude towards work, fostering a climate of trust and collaboration within the team.
Enthusiasm and passion for technology, creating a motivating environment for the team.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer.
Our Diversity, Equity, and Inclusion (DEI) initiative, Same Difference, is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants from diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Candidate Application Notice
Posted 1 month ago
7.0 - 10.0 years
11 - 16 Lacs
Pune, Bengaluru
Work from Office
Job Summary
Join Synechron as a Senior AWS & .NET Cloud Application Developer, where you will play a pivotal role in designing, developing, and maintaining cutting-edge cloud-based applications. This position is essential for delivering innovative solutions that meet our business objectives and client needs, leveraging your skills in AWS, .NET, NoSQL databases, Kafka, and Kubernetes. Your contributions will enhance application performance, scalability, and reliability, driving Synechron's success in the technology landscape.

Required Software Proficiency:
AWS (advanced experience in cloud application development)
.NET (strong proficiency in C#, ASP.NET, etc.)
NoSQL databases (experience with MongoDB, DynamoDB, Cassandra)
Kafka (hands-on experience with real-time data pipelines and streaming applications)
Kubernetes (experience in container orchestration and management)

Preferred Software Proficiency:
Microservices architecture
Serverless computing and AWS Lambda
Monitoring and logging tools (e.g., Prometheus, Grafana)
CI/CD tools and practices

Overall Responsibilities
Design, develop, and deploy scalable applications on AWS using .NET technologies.
Implement and manage NoSQL databases to ensure optimal performance and security.
Utilize Kafka for building real-time data pipelines and streaming applications.
Deploy and manage containerized applications using Kubernetes.
Collaborate with DevOps teams to automate deployment processes and improve CI/CD pipelines.
Troubleshoot and resolve technical issues promptly.
Participate in code reviews to ensure adherence to best practices and coding standards.
Stay updated with the latest industry trends and technologies to enhance application performance and reliability.

Technical Skills (By Category)
Programming Languages: Essential: .NET (C#, ASP.NET). Preferred: experience with serverless computing such as AWS Lambda.
Databases/Data Management: Essential: NoSQL (MongoDB, DynamoDB, Cassandra).
Cloud Technologies: Essential: AWS. Preferred: experience with serverless computing.
Frameworks and Libraries: Preferred: microservices architecture.
Development Tools and Methodologies: Preferred: CI/CD tools and practices.

Experience
7-10 years of experience in cloud application development.
Proven experience in AWS and .NET technologies.
Experience with NoSQL databases and Kafka.
Industry experience in developing scalable and efficient cloud applications.
Alternative pathways could include equivalent roles in cloud development with evidence of similar proficiency.

Day-to-Day Activities
Engage in the design and deployment of scalable cloud applications.
Collaborate with cross-functional teams to deliver project objectives.
Attend regular meetings to align on project goals and deliverables.
Make informed decisions related to technology solutions and project execution.
Manage application performance and provide ongoing optimization.

Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field.
Preferred certifications in AWS or related cloud technologies.

Professional Competencies
Strong analytical and problem-solving skills.
Excellent communication and teamwork abilities.
Adaptability and a continuous learning orientation.
An innovative mindset to drive technology solutions.
Effective time and priority management skills.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer.
Our Diversity, Equity, and Inclusion (DEI) initiative, Same Difference, is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants from diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Candidate Application Notice
Posted 1 month ago