0 years
0 Lacs
Gurugram, Haryana, India
On-site
Xceedance (www.xceedance.com) is a global provider of strategic consulting and managed services, technology, data sciences, and blockchain solutions to insurance organizations. Domiciled in Bermuda, with offices in the United States, United Kingdom, Germany, Poland, India, and Australia, Xceedance helps insurers launch new products, drive operations, implement intelligent technology, deploy advanced analytic capabilities, and achieve business process optimization. The experienced insurance professionals at Xceedance enable insurers, reinsurers, brokers, and program administrators worldwide to enhance policyholder service, enter new markets, boost workflow productivity, and improve profitability.

Xceedance has achieved phenomenal growth over the last several years — a tribute to the knowledge, scope, and impact of our people around the world. Everyone is laser-focused on delivering value to our customers. We are committed to the communities in which we live and work, and we are driven by a culture of innovation and integrity. As a member of the Xceedance team, you can shape a fulfilling career, participate in exciting projects, and impact the organization in meaningful ways. Count on strong support to develop skills, grow quickly, and meet your professional aspirations. Relish working in a highly collaborative setting that features state-of-the-art resources, modern technology, and a comfortable, gratifying environment. Create solutions and fulfill your role alongside highly talented and dynamic colleagues who will motivate you to be agile and extremely productive. And enjoy the advantages of a superior benefits package.

Our Mission and Vision: The people of Xceedance are unified in the mission to offer exemplary business services and craft market-disruptive solutions for insurance providers worldwide.

Position Title: GCP Data Architect

As a consulting business for re/insurers, our company strives to:
• Deliver solutions and services that promote growth and reinforce relationships
• Emphasize attentive, value-based interactions with clients and partners
• Provide seamless, consistent business experiences for all constituents
• Practice the constructive change and disruption we advocate
• Observe the tenets of a learning enterprise

Join us if you’re looking for an opportunity to be inspired, challenged, and rewarded!

Requirements and Responsibilities:
• Experience in GCP data services, including at least one end-to-end implementation in GCP using one or more of the following: BigQuery, Cloud Storage, Dataflow, Google Cloud Monitoring, Dataproc, Pub/Sub (a minimal Pub/Sub sketch follows this listing)
• Work with development teams to design and build cloud-native applications
• Evaluate and install cloud services and technologies to fulfill the company’s requirements
• Ensure that cloud solutions meet security and compliance standards
• Monitor and optimize cloud infrastructure for performance, cost, and security
• Provide engineering teams with technical assistance and coaching on GCP best practices
• Troubleshoot and resolve issues with cloud infrastructure and services
• Develop and maintain documentation for cloud infrastructure, processes, and procedures
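For context on the Pub/Sub item above, here is a minimal sketch of publishing an event to a Cloud Pub/Sub topic from Python with the google-cloud-pubsub client; the project, topic, and payload names are hypothetical placeholders, not details from the posting.

```python
import json

from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

# Hypothetical project and topic names; replace with real resources.
PROJECT_ID = "insurer-data-platform"
TOPIC_ID = "policy-events"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

# Pub/Sub messages are raw bytes; attributes (here "source") must be strings.
payload = json.dumps({"policy_id": "P-1001", "event": "renewal_quote"}).encode("utf-8")
future = publisher.publish(topic_path, data=payload, source="quote-engine")

# result() blocks until the broker acknowledges and returns the message ID.
print("Published message:", future.result())
```

A subscriber or a Dataflow job would typically consume these messages downstream.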
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
POSITION: Software Engineer – Data Engineering
LOCATION: Bangalore/Mumbai/Kolkata/Gurugram/Hyderabad/Pune/Chennai
EXPERIENCE: 5-9 Years

ABOUT HASHEDIN: We are software engineers who solve business problems with a Product Mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture.

OVERVIEW OF THE ROLE: As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.

Mandatory Skills:
• Hands-on software coding or scripting for a minimum of 4 years
• Experience in product management for at least 4 years
• Stakeholder management experience for at least 4 years
• Experience in at least one of the GCP, AWS, or Azure cloud platforms

Key Responsibilities:
• Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
• Implement efficient solutions for high-volume batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
• Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
• Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code); see the sketch after this listing.
• Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
• Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.

General Skills & Experience:
• Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
• Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
• Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
• Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
• Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
• Strong SQL development skills for ETL, analytics, and performance optimization.
• Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
• Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
• Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
• Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
• Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core.
• Previous experience with data migrations, modernization, or refactoring of legacy ETL processes to modern cloud architectures is a strong plus.
• Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.

EDUCATIONAL QUALIFICATIONS:
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
• Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
• Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
• Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
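To illustrate the Airflow DAG responsibility above, here is a minimal sketch of a three-step extract/transform/load DAG using Airflow 2.x's PythonOperator; the DAG id, schedule, and task logic are hypothetical and not part of the role description.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Stub: in practice this would pull from an API, database, or object store.
    return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 75.5}]


def transform(ti):
    rows = ti.xcom_pull(task_ids="extract")
    # Keep only high-value orders; the threshold is arbitrary for the example.
    return [r for r in rows if r["amount"] > 100]


def load(ti):
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows into the warehouse")


with DAG(
    dag_id="daily_orders_etl",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```

Small payloads move between tasks via XCom here; a production DAG would hand off through a warehouse table or object storage instead.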
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
Job Title: Data Engineer
Location: Remote
Experience: 5+ Years
Employment Type: Full-Time

Job Description: We are seeking a skilled and motivated Data Engineer with at least 5 years of hands-on experience to join our data team. The ideal candidate will have a strong background in designing and building data pipelines, working with large-scale datasets, and optimizing data workflows in cloud environments. You will play a critical role in ensuring our data infrastructure is robust, scalable, and efficient to support analytics, business intelligence, and machine learning use cases.

Key Responsibilities:
• Design, develop, and maintain reliable data pipelines and ETL processes using modern tools and frameworks (a minimal PySpark sketch follows this listing)
• Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and deliver high-quality datasets
• Work with both structured and unstructured data from diverse sources (APIs, logs, databases, etc.)
• Ensure data quality, integrity, and compliance across data assets
• Optimize data processing workflows for performance, scalability, and cost-efficiency
• Build and maintain data models, data lakes, and data warehouses on cloud platforms
• Monitor and troubleshoot pipeline failures, ensuring timely resolution of data issues
• Automate data operations and implement best practices in DevOps for data engineering

Required Skills:
• 5+ years of experience in data engineering or similar roles
• Proficiency in SQL and at least one programming language (preferably Python or Scala)
• Experience with data pipeline tools such as Apache Airflow, dbt, or Luigi
• Strong knowledge of ETL/ELT processes and data modeling techniques
• Hands-on experience with cloud data platforms like AWS (S3, Redshift, Glue), Azure (Data Factory, Synapse), or GCP (BigQuery, Dataflow)
• Familiarity with distributed data processing using Spark, Hadoop, or Kafka
• Experience with data warehouse/lakehouse architecture
• Version control using Git and CI/CD best practices for data workflows
• Strong problem-solving, communication, and collaboration skills
• Ability to work independently in a remote, fast-paced environment

Preferred Qualifications:
• Experience with containerization (Docker, Kubernetes) for data applications
• Familiarity with data governance, data security, and compliance standards
• Knowledge of real-time data processing and streaming architectures
• Exposure to BI tools like Power BI, Tableau, or Looker
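As a minimal sketch of the kind of batch ETL pipeline described above, the following PySpark job reads raw JSON events, aggregates them by day, and writes partitioned Parquet; the paths and column names are hypothetical assumptions, not requirements from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical paths; in practice these would point at S3/GCS/ADLS locations.
SOURCE_PATH = "data/raw/events.json"
TARGET_PATH = "data/curated/daily_event_counts"

spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

# Assumes each JSON record has event_timestamp and event_type fields.
events = spark.read.json(SOURCE_PATH)

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Partition by date so downstream queries can prune efficiently.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(TARGET_PATH)

spark.stop()
```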
Posted 1 month ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
GCP Data Engineer
Location: Chennai, Hyderabad
Skills: GCP, Dataflow, Airflow, Dataproc, Python, BigQuery, with excellent communication
Experience: 4-7 Years
Notice Period: 0-30 Days

Job Description / Overview: Hands-on, experienced GCP developers (5+ years of coding) with experience in OO Python concepts, pip, and versioning; strong experience working in distributed teams; strong Git skills; experience working with GCP and especially data pipelines; ideally experience with database integrations (not DBAs - experience driving BigQuery or other SQL endpoints from Python code, as sketched below); ideally Dataflow / Apache Beam experience.
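For the "driving BigQuery from Python code" point above, here is a minimal sketch using the google-cloud-bigquery client; the project, dataset, and table names are hypothetical placeholders.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project ID

query = """
    SELECT customer_id, SUM(amount) AS total_spend
    FROM `my-gcp-project.sales.transactions`   -- hypothetical dataset/table
    WHERE DATE(created_at) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
"""

# result() waits for the query job to finish and returns an iterable of rows.
for row in client.query(query).result():
    print(row["customer_id"], row["total_spend"])
```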
Posted 1 month ago
6.0 years
0 Lacs
Bengaluru, Karnataka
Remote
Senior Software Engineer
Bangalore, Karnataka, India + 1 more location
Date posted: Jun 19, 2025
Job number: 1830832
Work site: Up to 50% work from home
Travel: None
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview
Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers further. This is a world of more possibilities, more innovation, more openness, and sky-is-the-limit thinking in a cloud-enabled world.

Microsoft’s Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture.

Within Azure Data, the data integration team builds data gravity on the Microsoft Cloud. Massive volumes of data are generated – not just from transactional systems of record, but also from the world around us. Our data integration products – Azure Data Factory and Power Query – make it easy for customers to bring in, clean, shape, and join data, to extract intelligence.

We’re the team that developed the Mashup Engine (M) and Power Query. We already ship monthly to millions of users across Excel, Power/Pro BI, Flow, and PowerApps; but in many ways we’re just getting started. We’re building new services, experiences, and engine capabilities that will broaden the reach of our technologies to several new areas – data “intelligence”, large-scale data analytics, and automated data integration workflows. We plan to use example-based interaction, machine learning, and innovative visualization to make data access and transformation even more intuitive for non-technical users.

We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.

Qualifications
Required/Minimum Qualifications
• Bachelor's Degree in Computer Science or a related technical discipline AND 6+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience. Experience in data integration, migrations, ELT, or ETL tooling is mandatory.

Other Requirements
Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred/Additional Qualifications
• BS degree in Computer Science
• Engine role: familiarity with data access technologies (e.g. ODBC, JDBC, OLEDB, ADO.Net, OData), query languages (e.g. T-SQL, Spark SQL, Hive, MDX, DAX), query generation/optimization, OLAP
• UI role: familiarity with JavaScript, TypeScript, CSS, React, Redux, webpack
• Service role: familiarity with micro-service architectures, Docker, Service Fabric, Azure blobs/tables/databases, high-throughput services
• Full-stack role: a mix of the qualifications for the UX/service/backend roles

Equal Opportunity Employer (EOE)
#azdat #azuredata #microsoftfabric #dataintegration

Responsibilities
• Engine layer: designing and implementing components for dataflow orchestration, distributed querying, query translation, connecting to external data sources, and script parsing/interpretation
• Service layer: designing and implementing infrastructure for a containerized, microservices-based, high-throughput architecture
• UI layer: designing and implementing performant, engaging web user interfaces for data visualization/exploration/transformation/connectivity and dataflow management
• Embody our culture and values

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work: industry-leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 month ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You’ll Do
• Design, develop, and operate high-scale applications across the full engineering stack
• Design, develop, test, deploy, maintain, and improve software
• Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.)
• Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset
• Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality
• Participate in a tight-knit, globally distributed engineering team
• Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality
• Manage sole project priorities, deadlines, and deliverables
• Research, create, and develop software applications to extend and improve on Equifax solutions
• Collaborate on scalability issues involving access to data and information
• Actively participate in Sprint planning, Sprint retrospectives, and other team activities

What Experience You Need
• Bachelor's degree or equivalent experience
• 5+ years of software engineering experience
• 5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, and CSS
• 5+ years of experience with cloud technology: GCP, AWS, or Azure
• 5+ years of experience designing and developing cloud-native solutions
• 5+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, and GKE/Kubernetes
• 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs

What Could Set You Apart
• Self-starter that identifies/responds to priority shifts with minimal supervision
• Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others
• UI development (e.g. HTML, JavaScript, Angular, and Bootstrap)
• Experience with backend technologies such as Java/J2EE, Spring Boot, SOA, and microservices
• Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven and Gradle
• Agile environments (e.g. Scrum, XP)
• Relational databases (e.g. SQL Server, MySQL)
• Atlassian tooling (e.g. JIRA, Confluence, and GitHub)
• Developing with a modern JDK (v1.7+)
• Automated testing: JUnit, Selenium, LoadRunner, SoapUI

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax? At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best.

Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Posted 1 month ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You’ll Do
• Design, develop, and operate high-scale applications across the full engineering stack
• Design, develop, test, deploy, maintain, and improve software
• Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.)
• Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset
• Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality
• Participate in a tight-knit, globally distributed engineering team
• Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality
• Manage sole project priorities, deadlines, and deliverables
• Research, create, and develop software applications to extend and improve on Equifax solutions
• Collaborate on scalability issues involving access to data and information
• Actively participate in Sprint planning, Sprint retrospectives, and other team activities

What Experience You Need
• Bachelor's degree or equivalent experience
• 5+ years of software engineering experience
• 5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, and CSS
• 5+ years of experience with cloud technology: GCP, AWS, or Azure
• 5+ years of experience designing and developing cloud-native solutions
• 5+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, and GKE/Kubernetes
• 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs

What Could Set You Apart
• Self-starter that identifies/responds to priority shifts with minimal supervision
• Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others (a minimal Apache Beam sketch follows this listing)
• UI development (e.g. HTML, JavaScript, Angular, and Bootstrap)
• Experience with backend technologies such as Java/J2EE, Spring Boot, SOA, and microservices
• Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven and Gradle
• Agile environments (e.g. Scrum, XP)
• Relational databases (e.g. SQL Server, MySQL)
• Atlassian tooling (e.g. JIRA, Confluence, and GitHub)
• Developing with a modern JDK (v1.7+)
• Automated testing: JUnit, Selenium, LoadRunner, SoapUI
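For the Dataflow/Apache Beam item in the listing above, here is a minimal Beam (Python SDK) sketch that reads JSON events from Cloud Storage, filters them, and writes to BigQuery; the bucket, project, table, and schema are hypothetical, and the pipeline is shown with the DirectRunner for local testing rather than a production Dataflow deployment.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical options; on GCP you would use runner="DataflowRunner" plus
# project, region, and a staging/temp GCS location.
options = PipelineOptions(
    runner="DirectRunner",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromText("gs://my-bucket/events/*.json")
        | "ParseJson" >> beam.Map(json.loads)
        | "KeepPurchases" >> beam.Filter(lambda e: e.get("type") == "purchase")
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-project:analytics.purchases",
            schema="user_id:STRING,amount:FLOAT,type:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```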
Posted 1 month ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Company Description
Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

The Data Research Engineering Team is a brand-new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Their responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. They play a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes (a minimal import-workflow sketch follows this listing), optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role.

Job Description / Responsibilities
• Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
• Work with databases of varying scales, including small-scale databases and databases involving big data processing.
• Work on data security and compliance by implementing access controls, encryption, and compliance standards.
• Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
• Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
• Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency.
• Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
• Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
• Monitor database health and identify and resolve issues.
• Collaborate with the full-stack web developer on the team to support the implementation of efficient data access and retrieval mechanisms.
• Implement data security measures to protect sensitive information and comply with relevant regulations.
• Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
• Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
• Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
• Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
• Use Python for tasks such as data manipulation, automation, and scripting.
• Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines.
• Assume accountability for achieving development milestones.
• Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
• Collaborate with and assist fellow members of the Data Research Engineering Team as required.
• Perform tasks with precision and build reliable systems.
• Leverage online resources effectively like StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations.

Skills And Experience
• Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
• Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals.
• Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
• Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
• Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
• Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
• Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
• Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
• Knowledge of SQL and understanding of database design principles, normalization, and indexing.
• Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
• Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
• Eagerness to develop import workflows and scripts to automate data import processes.
• Knowledge of data security best practices, including access controls, encryption, and compliance standards.
• Strong problem-solving and analytical skills with attention to detail.
• Creative and critical thinking.
• Strong willingness to learn and expand knowledge in data engineering.
• Familiarity with Agile development methodologies is a plus.
• Experience with version control systems, such as Git, for collaborative development.
• Ability to thrive in a fast-paced environment with rapidly changing priorities.
• Ability to work collaboratively in a team environment.
• Good and effective communication skills.
• Comfortable with autonomy and ability to work independently.

Qualifications
• 5+ years of experience in database engineering.

Additional Information — Perks:
• Day off on the 3rd Friday of every month (one long weekend each month)
• Monthly Wellness Reimbursement Program to promote health and well-being
• Monthly Office Commutation Reimbursement Program
• Paid paternity and maternity leaves
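As a minimal sketch of the import workflows mentioned in the listing above, the following script loads a spreadsheet export into PostgreSQL using Pandas and SQLAlchemy; the connection string, file path, and table name are hypothetical placeholders.

```python
import pandas as pd
from sqlalchemy import create_engine  # pip install sqlalchemy psycopg2-binary pandas

# Hypothetical connection string and file path.
engine = create_engine("postgresql+psycopg2://etl_user:secret@localhost:5432/research")

df = pd.read_csv("exports/company_directory.csv")

# Basic cleanup before load: normalize column names and drop exact duplicates.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df = df.drop_duplicates()

# Append into the target table; a real workflow would validate row counts afterwards.
df.to_sql("company_directory", con=engine, if_exists="append", index=False)
```

The same pattern extends to BigQuery by swapping the SQLAlchemy engine for a BigQuery client or a pandas-gbq style loader.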
Posted 1 month ago
3.0 - 5.0 years
5 - 8 Lacs
Hyderabad
Work from Office
Primary skills: GCP, Python (coding is a must), SQL coding skills, BigQuery, Dataflow, Airflow, Kafka, and Airflow DAGs.
Bachelor's degree or equivalent experience in Computer Science or a related field.
Required: Immediate or 15 days.

Job Description
• 3+ years of experience as a software engineer or equivalent, designing large data-heavy distributed systems and/or high-traffic web apps
• Experience in at least one programming language (Python, with 2+ years of strong coding, is a must; or Java)
• Hands-on experience designing and managing large data models, writing performant SQL queries, and working with large datasets and related technologies
• Experience designing and interacting with APIs (REST/GraphQL)
• Experience working with cloud platforms such as GCP and BigQuery
• Experience in DevOps processes/tooling (CI/CD, GitHub Actions), using version control systems (Git strongly preferred), and working in a remote software development environment
• Strong analytical, problem-solving and interpersonal skills, a hunger to learn, and the ability to operate in a self-guided manner in a fast-paced, rapidly changing environment
• Must have: experience in pipeline orchestration (e.g. Airflow)
• Must have: at least 1 year of experience in Dataflow
• Preferred: experience using infrastructure-as-code frameworks (Terraform)
• Preferred: experience using big data tools such as Spark/PySpark
• Preferred: experience using or deploying MLOps systems/tooling (e.g. MLflow)
• Preferred: experience in an additional programming language (JavaScript, Java, etc.)
• Preferred: experience using data science/machine learning technologies.
Posted 1 month ago
7.0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Description
This is a full-time on-site Data Architect role located in Kochi at AMUS HIRING. The Data Architect will be responsible for tasks related to Data Governance, Data Architecture, Data Modeling, Extract Transform Load (ETL), and Data Warehousing. The role involves designing and managing data structures to support business needs and ensuring data quality and security.

Qualifications
• Data Governance and Data Architecture skills
• Data Modeling and ETL skills
• Data Warehousing expertise
• Experience in designing and implementing data solutions
• Proficiency with database management systems
• Strong analytical and problem-solving skills
• Bachelor's degree in Computer Science, Information Technology, or a related field

Primary Skills
• 7+ years of experience in data architecture, with at least 3 years in a GCP environment
• Expertise in BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Storage, and related GCP services
• Google Cloud certification is preferred

Note: Immediate joiners only; 10+ years of overall experience.
Posted 1 month ago
10.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Introduction
We are looking for candidates with 10+ years of experience in a data architect role.

Responsibilities include:
• Lead the design and development of data pipelines with BigQuery, Dataflow, and Cloud Storage.
• Architect and implement data lakes, data warehouses, and real-time data processing solutions on GCP.
• Ensure data architecture aligns with business goals, governance, and compliance requirements.
• Design and implement scalable, secure, and cost-effective data architectures using GCP.
• Collaborate with stakeholders to define data strategy and roadmap.
• Design and deploy BigQuery solutions for optimized performance and cost efficiency.
• Build and maintain ETL/ELT pipelines for large-scale data processing.
• Leverage Cloud Pub/Sub, Dataflow, and Cloud Functions for real-time data integration.
• Implement best practices for data security, privacy, and compliance in cloud environments.
• Integrate machine learning workflows with data pipelines and analytics tools.
• Define data governance frameworks and manage data lineage.
• Lead data modeling efforts to ensure consistency, accuracy, and performance across systems.
• Optimize cloud infrastructure for scalability, performance, and reliability.
• Mentor junior team members and ensure adherence to architectural standards.
• Collaborate with DevOps teams to implement Infrastructure as Code (Terraform, Cloud Deployment Manager).
• Ensure high availability and disaster recovery solutions are built into data systems.
• Conduct technical reviews, audits, and performance tuning for data solutions.
• Design solutions for multi-region and multi-cloud data architecture.
• Stay updated on emerging technologies and trends in data engineering and GCP.
• Drive innovation in data architecture, recommending new tools and services on GCP.

Certifications:
• Google Cloud Certification is preferred.

Primary Skills:
• 7+ years of experience in data architecture, with at least 3 years in GCP environments.
• Expertise in BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Storage, and related GCP services.
• Strong experience in data warehousing, data lakes, and real-time data pipelines.
• Proficiency in SQL, Python, or other data processing languages.
• Experience with cloud security, data governance, and compliance frameworks.
• Strong problem-solving skills and ability to architect solutions for complex data environments.
• Google Cloud Certification (Professional Data Engineer, Professional Cloud Architect) preferred.
• Leadership experience and ability to mentor technical teams.
• Excellent communication and collaboration skills.
Posted 1 month ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

JD for L&A Business Consultant
Working as part of the Consulting team, you will take part in engagements related to a wide range of topics. Some examples of domains in which you will support our clients include the following:
• Proficient in Individual and Group Life Insurance concepts and different types of Annuity products
• Proficient in different insurance plans - Qualified/Non-Qualified Plans, IRA, Roth IRA, CRA, SEP
• Solid knowledge of the policy life cycle: Illustrations/Quote/Rating, New Business & Underwriting, Policy Servicing and Administration, Billing & Payment, Claims Processing, Disbursement (systematic withdrawals, RMD, surrenders), Regulatory Changes & Taxation
• Understanding of the business rules of pay-out
• Demonstrated ability with insurance company operations such as nonforfeiture options, face amount increases/decreases, CVAT or GPT calculations, and dollar cost averaging, and the ability to perform their respective transactions
• Understanding of upstream and downstream interfaces for the policy lifecycle

Consulting Skills
• Experience in creating business process maps for future-state architecture, creating a WBS for the overall conversion strategy, and the requirement refinement process in multi-vendor engagements
• Worked on multiple business transformation and modernization programs
• Conducted multiple due-diligence and assessment projects as part of transformation roadmaps to evaluate current-state maturity, gaps in functionality, and COTS solution features
• Requirements gathering and elicitation – writing BRDs and FSDs, conducting JAD sessions and workshops to capture requirements, and working closely with the Product Owner
• Work with the client to define the most optimal future-state operational process and related product configuration
• Define scope by providing innovative solutions and challenging all new client requirements and change requests, while ensuring that the client gets the required business value
• Elaborate and deliver clearly defined requirement documents with relevant dataflow and process flow diagrams
• Work closely with the product design development team to analyse and extract functional enhancements
• Provide product consultancy and assist the client with acceptance criteria gathering and support throughout the project life cycle

Technology Skills
• Proficient in technology solution architecture, with a focus on designing innovative and effective solutions
• Experienced in data migration projects, ensuring seamless transfer of data between systems while maintaining data integrity and security
• Skilled in data analytics, utilizing various tools and techniques to extract insights and drive informed decision-making
• Strong understanding of data governance principles and best practices, ensuring data quality and compliance
• Collaborative team player, able to work closely with stakeholders and technical teams to define requirements and implement effective solutions
• Industry certifications (AAPA/LOMA) will be an added advantage
• Experience with these COTS products is preferable: FAST, ALIP, OIPA, wmA

We expect you to work effectively as a team member and build good relationships with the client. You will have the opportunity to expand your domain knowledge and skills and will be able to collaborate frequently with other EY professionals with a wide variety of expertise.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 month ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Data Scientist - Retail & E-commerce Analytics with Personalization, Campaigns & GCP/BigQuery Expertise

We are looking for a skilled Data Scientist with strong expertise in Retail & E-commerce Analytics, particularly in personalization, campaign optimization, and Generative AI (GenAI), along with hands-on experience working with Google Cloud Platform (GCP) and BigQuery. The ideal candidate will use data science methodologies and advanced machine learning techniques to drive personalized customer experiences, optimize marketing campaigns, and create innovative solutions for the retail and e-commerce business. This role will also involve working with large-scale datasets on GCP and performing high-performance analytics using BigQuery.

Responsibilities

E-commerce Analytics & Personalization:
• Develop and implement machine learning models for personalized recommendations, product search optimization, and customer segmentation to improve the online shopping experience.
• Analyze customer behavior data to create tailored experiences that drive engagement, conversions, and customer lifetime value.
• Build recommendation systems using collaborative filtering, content-based filtering, and hybrid approaches (a minimal collaborative-filtering sketch follows this listing).
• Use predictive modeling techniques to forecast customer behavior and sales trends and to optimize inventory management.

Campaign Optimization:
• Analyze and optimize digital marketing campaigns across various channels (email, social media, display ads, etc.) using statistical analysis and A/B testing methodologies.
• Build predictive models to measure campaign performance, improving targeting, content, and budget allocation.
• Utilize customer data to create hyper-targeted campaigns that increase customer acquisition, retention, and conversion rates.
• Evaluate customer interactions and campaign performance to provide insights and strategies for future optimization.

Generative AI (GenAI) & Innovation:
• Use Generative AI (GenAI) techniques to dynamically generate personalized content for marketing, such as product descriptions, email content, and banner designs.
• Leverage Generative AI to synthesize synthetic data, enhance existing datasets, and improve model performance.
• Work with teams to incorporate GenAI solutions into automated customer service chatbots, personalized product recommendations, and digital content creation.

Big Data Analytics with GCP & BigQuery:
• Leverage Google Cloud Platform (GCP) for scalable data processing, machine learning, and advanced analytics.
• Utilize BigQuery for large-scale data querying, processing, and building data pipelines, allowing efficient data handling and analytics at scale.
• Optimize data workflows on GCP using tools like Cloud Storage, Cloud Functions, Cloud Dataproc, and Dataflow to ensure data is clean, reliable, and accessible for analysis.
• Collaborate with engineering teams to maintain and optimize data infrastructure for real-time and batch data processing in GCP.

Data Analysis & Insights:
• Perform data analysis across customer behavior, sales, and marketing datasets to uncover insights that drive business decisions.
• Develop interactive reports and dashboards using Google Data Studio to visualize key performance metrics and findings.
• Provide actionable insights on key e-commerce KPIs such as conversion rate, average order value (AOV), customer lifetime value (CLV), and cart abandonment rate.

Collaboration & Cross-Functional Engagement:
• Work closely with marketing, product, and technical teams to ensure that data-driven insights are used to inform business strategies and optimize retail e-commerce operations.
• Communicate findings and technical concepts effectively to stakeholders, ensuring they are actionable and aligned with business goals.

Key Technical Skills

Machine Learning & Data Science:
• Proficiency in Python or R for data manipulation, machine learning model development (scikit-learn, XGBoost, LightGBM), and statistical analysis.
• Experience building recommendation systems and personalization algorithms (e.g., collaborative filtering, content-based filtering).
• Familiarity with Generative AI (GenAI) technologies, including transformer models (e.g., GPT), GANs, and BERT for content generation and data augmentation.
• Knowledge of A/B testing and multivariate testing for campaign analysis and optimization.

Big Data & Cloud Analytics:
• Hands-on experience with Google Cloud Platform (GCP), specifically BigQuery for large-scale data analytics and querying.
• Familiarity with BigQuery ML for running machine learning models directly in BigQuery.
• Experience working with GCP tools like Cloud Dataproc, Cloud Functions, Cloud Storage, and Dataflow to build scalable and efficient data pipelines.
• Expertise in SQL for data querying, analysis, and optimization of data workflows in BigQuery.

E-commerce & Retail Analytics:
• Strong understanding of e-commerce metrics such as conversion rates, AOV, CLV, and cart abandonment.
• Experience with analytics tools like Google Analytics, Adobe Analytics, or similar platforms for web and marketing data analysis.

Data Visualization & Reporting:
• Proficiency in data visualization tools like Tableau, Power BI, or Google Data Studio to create clear, actionable insights for business teams.
• Experience developing dashboards and reports that monitor KPIs and e-commerce performance.

Desired Qualifications:
• Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Engineering, or related fields.
• 5+ years of experience in data science, machine learning, and e-commerce analytics, with a strong focus on personalization, campaign optimization, and Generative AI.
• Hands-on experience working with GCP and BigQuery for data analytics, processing, and machine learning at scale.
• Proven experience in a client-facing role or collaborating cross-functionally with product, marketing, and technical teams to deliver data-driven solutions.
• Strong problem-solving abilities, with the ability to analyze large datasets and turn them into actionable insights for business growth.
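To make the recommendation-system item above concrete, here is a minimal item-based collaborative filtering sketch using cosine similarity over a tiny synthetic interaction matrix; the users, products, and ratings are invented for illustration, and this is just one of the approaches the listing mentions (collaborative, content-based, hybrid).

```python
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

# Tiny synthetic user-item matrix (rows = users, columns = products).
ratings = pd.DataFrame(
    [[5, 3, 0, 1],
     [4, 0, 0, 1],
     [1, 1, 0, 5],
     [0, 0, 5, 4]],
    index=["u1", "u2", "u3", "u4"],
    columns=["shoes", "socks", "kettle", "mug"],
)

# Item-item cosine similarity computed on the interaction matrix.
item_sim = pd.DataFrame(
    cosine_similarity(ratings.T),
    index=ratings.columns,
    columns=ratings.columns,
)

def recommend(user, k=2):
    """Score unseen items by a similarity-weighted sum of the user's interactions."""
    seen = ratings.loc[user]
    scores = item_sim.mul(seen, axis=0).sum(axis=0)
    scores = scores[seen == 0]  # only recommend items the user has not interacted with
    return scores.sort_values(ascending=False).head(k)

print(recommend("u2"))
```

A production system would replace the toy matrix with implicit feedback at scale (for example via BigQuery ML matrix factorization or a dedicated library), but the scoring idea is the same.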
Posted 1 month ago
8.0 - 13.0 years
27 - 42 Lacs
Kolkata, Hyderabad, Pune
Work from Office
About Client: Hiring for one of the most prestigious multinational corporations.

Job Title: Senior GCP Data Engineer
Experience: 8 to 13 years

Key Responsibilities:
• Design, build, and maintain scalable and reliable data pipelines on Google Cloud Platform (GCP).
• Develop ETL/ELT workflows using Cloud Dataflow, Apache Beam, Dataproc, BigQuery, and Cloud Composer (Airflow).
• Optimize the performance of data processing and storage solutions (e.g., BigQuery, Cloud Storage).
• Collaborate with data analysts, data scientists, and business stakeholders to deliver data-driven insights.
• Design and implement data lake and data warehouse solutions following best practices.
• Ensure data quality, security, and governance across GCP environments.
• Implement CI/CD pipelines for data engineering workflows using tools like Cloud Build, GitLab CI, or Jenkins.
• Monitor and troubleshoot data jobs, ensuring reliability and timeliness of data delivery.
• Mentor junior engineers and participate in architectural design discussions.

Technical Skills:
• Strong experience in Google Cloud Platform (GCP) data services: BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Storage, Cloud Functions.
• Proficiency in Python and/or Java for data processing.
• Strong knowledge of SQL and performance tuning in large-scale environments.
• Hands-on experience with Apache Beam, Apache Spark, and Airflow.
• Solid understanding of data modeling, data warehousing, and streaming/batch processing.
• Experience with CI/CD, Git, and modern DevOps practices for data workflows.
• Familiarity with data security and compliance in cloud environments.

Notice period: Only immediate and 15-day joiners.
Location: Pune, Chennai, Hyderabad, Kolkata
Mode of Work: WFO (Work From Office)

Thanks & Regards,
SWETHA
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, INDIA
Contact Number: 8067432433
rathy@blackwhite.in | www.blackwhite.in
Posted 1 month ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
🚨 We’re Hiring | GCP Data Engineer (Full-Time)
📍 Location: Chennai (Onsite)
📅 Experience: 8+ Years
💼 Notice period: 0 days
💰 Budget: Based on Experience

Are you a data-driven engineer with strong hands-on experience in GCP, Python, and Big Data technologies? We’re looking for seasoned professionals to join our growing team in Chennai!

🔧 Key Skills & Experience Required:
✅ 2+ years in GCP services: BigQuery, Dataflow, Dataproc, Dataplex, Data Fusion, Cloud SQL, Cloud Storage, Redis Memorystore
✅ 2+ years in Terraform, Tekton, and data transfer utilities
✅ 2+ years in Git or other version control tools
✅ 2+ years in Confluent Kafka
✅ 1+ year in API development
✅ 2+ years working in Agile frameworks
✅ 4+ years in Python & PySpark development (a minimal PySpark-on-Kafka sketch follows this listing)
✅ 4+ years in shell scripting for data import/export

💡 Bonus Skills: Cloud Run, Dataform, Airflow, Agile software development methodologies

🎓 Education: Bachelor’s degree (required)

If you're passionate about data engineering and cloud-native development and ready to work on challenging, large-scale solutions — we want to hear from you!

📩 Apply here: rajesh@reveilletechnologies.com
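As a minimal sketch combining the PySpark and Kafka skills listed above, the following Structured Streaming job reads from a Kafka topic and prints decoded messages; the broker address and topic name are hypothetical, and the Kafka connector package must be supplied at submit time.

```python
from pyspark.sql import SparkSession, functions as F

# Requires the spark-sql-kafka connector on the classpath, e.g.
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0 stream_job.py
spark = SparkSession.builder.appName("kafka-orders-stream").getOrCreate()

orders = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the value to string for downstream parsing.
decoded = orders.select(F.col("value").cast("string").alias("payload"))

query = (
    decoded.writeStream
    .format("console")   # swap for a BigQuery/GCS sink in a real pipeline
    .outputMode("append")
    .start()
)
query.awaitTermination()
```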
Posted 1 month ago
7.0 years
0 Lacs
India
Remote
Lemongrass Consulting (www.lemongrassconsulting.com) is a leading professional and managed services provider. Lemongrass (lemongrasscloud.com) is a global leader in SAP consulting, focused on helping organizations transform their business processes through innovative solutions and technologies. With a strong commitment to customer success, Lemongrass partners with companies to drive their digital transformation journeys, enabling them to unlock the full potential of their SAP investments. We do this with our continuous innovation, automation, migration and operation, delivered on the world's most comprehensive cloud platforms – AWS, Azure and GCP and SAP Cloud ERP. We have been working with AWS and SAP since 2010 and we are a Premier Amazon Partner Network (APN) Consulting Partner. We are also a Microsoft Gold Partner, a Google Cloud Partner and an SAP Certified Silver Partner.

Our team is what makes Lemongrass exceptional and why we have the excellent reputation in the market that we enjoy today. At Lemongrass, you will work with the smartest and most motivated people in the business. We take pride in our culture of innovation and collaboration that drives us to deliver exceptional benefits to our clients every day.

About the Role: We are seeking an experienced Cloud Data Engineer with a strong background in AWS, Azure, and GCP. The ideal candidate will have extensive experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, and other ETL tools like Informatica, SAP Data Intelligence, etc. You will be responsible for designing, implementing, and maintaining robust data pipelines and building scalable data lakes. Experience with various data platforms like Redshift, Snowflake, Databricks, Synapse, and others is essential. Familiarity with data extraction from SAP or ERP systems is a plus.

Key Responsibilities:
• Design and Development:
• Design, develop, and maintain scalable ETL pipelines using cloud-native tools (AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.).
• Architect and implement data lakes and data warehouses on cloud platforms (AWS, Azure, GCP).
• Develop and optimize data ingestion, transformation, and loading processes using Databricks, Snowflake, Redshift, BigQuery, and Azure Synapse.
• Implement ETL processes using tools like Informatica, SAP Data Intelligence, and others. Develop and optimize data processing jobs using Spark Scala.
• Data Integration and Management:
• Integrate various data sources, including relational databases, APIs, unstructured data, and ERP systems, into the data lake.
• Ensure data quality and integrity through rigorous testing and validation.
• Perform data extraction from SAP or ERP systems when necessary.
• Performance Optimization:
• Monitor and optimize the performance of data pipelines and ETL processes.
• Implement best practices for data management, including data governance, security, and compliance.
• Collaboration and Communication:
• Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
• Collaborate with cross-functional teams to design and implement data solutions that meet business needs.
• Documentation and Maintenance:
• Document technical solutions, processes, and workflows.
• Maintain and troubleshoot existing ETL pipelines and data integrations.

Qualifications:
• Education: Bachelor’s degree in Computer Science, Information Technology, or a related field. Advanced degrees are a plus.
• Experience:
• 7+ years of experience as a Data Engineer or in a similar role.
• Proven experience with cloud platforms: AWS, Azure, and GCP.
• Hands-on experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.
• Experience with other ETL tools like Informatica, SAP Data Intelligence, etc.
• Experience in building and managing data lakes and data warehouses.
• Proficiency with data platforms like Redshift, Snowflake, BigQuery, Databricks, and Azure Synapse.
• Experience with data extraction from SAP or ERP systems is a plus. Strong experience with Spark and Scala for data processing.
• Skills:
• Strong programming skills in Python, Java, or Scala.
• Proficient in SQL and query optimization techniques.
• Familiarity with data modeling, ETL/ELT processes, and data warehousing concepts.
• Knowledge of data governance, security, and compliance best practices.
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.

Preferred Qualifications:
• Experience with other data tools and technologies such as Apache Spark or Hadoop.
• Certifications in cloud platforms (AWS Certified Data Analytics – Specialty, Google Professional Data Engineer, Microsoft Certified: Azure Data Engineer Associate).
• Experience with CI/CD pipelines and DevOps practices for data engineering.

What we offer in return:
• Remote Working: Lemongrass always has been and always will offer 100% remote work
• Flexibility: Work where and when you like most of the time
• Training: A subscription to A Cloud Guru and a generous budget for taking certifications and other resources you’ll find helpful
• State-of-the-art tech: An opportunity to learn and run the latest industry standard tools
• Team: Colleagues who will challenge you, giving you the chance to learn from them and them from you

Selected applicants will be subject to a background investigation, which will be conducted and the results of which will be used in compliance with applicable law. Lemongrass Consulting is an Equal Opportunity/Affirmative Action employer. All qualified candidates will receive consideration for employment without regard to disability, protected veteran status, race, color, religious creed, national origin, citizenship, marital status, sex, sexual orientation/gender identity, age, or genetic information.
Posted 1 month ago
1.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary: We are looking for a passionate and detail-oriented ETL Developer with 1 to 4 years of experience in building, testing, and maintaining ETL processes. The ideal candidate should have a strong understanding of data warehousing concepts, ETL tools, and database technologies. Key Responsibilities: ✅ Design, develop, and maintain ETL workflows and processes using ETL tools such as Informatica, Talend, SSIS, Pentaho, or custom ETL frameworks. ✅ Understand data requirements and translate them into technical specifications and ETL designs. ✅ Optimize and troubleshoot ETL processes for performance and scalability. ✅ Ensure data quality, integrity, and security across all ETL jobs. ✅ Perform data analysis and validation for business reporting. ✅ Collaborate with Data Engineers, DBAs, and Business Analysts to ensure smooth data operations. Required Skills: • 1-4 years of hands-on experience with ETL tools (e.g., Informatica, Talend, SSIS, Pentaho, or equivalent). • Proficiency in SQL and experience working with RDBMS (e.g., SQL Server, Oracle, MySQL, PostgreSQL). • Good understanding of data warehousing concepts and data modeling. • Experience in handling large datasets and performance tuning of ETL jobs. • Ability to work in Agile environments and participate in code reviews. • Ability to learn and work with open-source technologies like Node.js and AngularJS. Preferred Skills (Good to Have): • Experience with cloud ETL solutions (AWS Glue, Azure Data Factory, GCP Dataflow). • Exposure to big data ecosystems (Hadoop, Spark). Qualifications: 🎓 Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field. Show more Show less
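To make the ETL workflow responsibilities above concrete, here is a small Python sketch that extracts recent rows from a source RDBMS, applies simple transformations, and loads them into a staging table using pandas and SQLAlchemy. The connection strings, table and column names, and the assumption that the pymysql and psycopg2 drivers are installed are all illustrative, not taken from the posting.

```python
# Minimal ETL sketch with pandas + SQLAlchemy: extract from MySQL, transform,
# load into PostgreSQL. Connection strings, tables, and columns are assumptions.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("mysql+pymysql://user:pass@source-host/sales")
target = create_engine("postgresql+psycopg2://user:pass@dwh-host/analytics")

# Extract: pull only yesterday's orders to keep the job incremental.
orders = pd.read_sql(
    "SELECT order_id, customer_id, amount, order_date "
    "FROM orders WHERE order_date = CURDATE() - INTERVAL 1 DAY",
    source,
)

# Transform: basic cleansing and a derived column.
orders = orders.dropna(subset=["order_id"]).drop_duplicates("order_id")
orders["amount_usd"] = orders["amount"].round(2)

# Load: append into the staging table for downstream reporting.
orders.to_sql("stg_orders", target, if_exists="append", index=False)
```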
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Engineer Location: Hyderabad, Kochi, Trivandrum Experience Required: 10-19 Yrs Skills: Primary - Scala, Pyspark, Python / Secondary - ETL, SQL, Azure Role Proficiency The role demands expertise in building robust, scalable data pipelines that support ingestion, wrangling, transformation, and integration of data from multiple sources. The ideal candidate should have hands-on experience with ETL tools (e.g., Informatica, AWS Glue, Databricks, GCP DataProc), and strong programming skills in Python, PySpark, SQL, and optionally Scala. Proficiency across various data domains and familiarity with modern data warehouse and lakehouse architectures (Snowflake, BigQuery, Delta Lake, Lakehouse) is essential. A solid understanding of DevOps and infrastructure cost optimization is required. Key Responsibilities & Outcomes Technical Development Develop high-performance data pipelines and applications. Optimize development using design patterns and reusable solutions. Create and tune code using best practices for performance and scalability. Develop schemas, data models, and data storage solutions (SQL/NoSQL/Delta Lake). Perform debugging, testing, and validation to ensure solution quality. Documentation & Design Produce high-level and low-level design (HLD, LLD, SAD) and architecture documentation. Prepare infra costing, source-target mappings, and business requirement documentation. Contribute to and govern documentation standards/templates/checklists. Project & Team Management Support Project Manager in planning, delivery, and sprint execution. Estimate effort and provide input on resource planning. Lead and mentor junior team members, define goals, and monitor progress. Monitor and manage defect lifecycle including RCA and proactive quality improvements. Customer Interaction Gather and clarify requirements with customers and architects. Present design alternatives and conduct product demos. Ensure alignment with customer expectations and solution architecture. Testing & Release Design and review unit/integration test cases and execution strategies. Provide support during system/integration testing and UAT. Oversee and execute release cycles and configurations. Knowledge Management & Compliance Maintain compliance with configuration management plans. Contribute to internal knowledge repositories and reusable assets. Stay updated and certified on relevant technologies/domains. Measures of Success (KPIs) Adherence to engineering processes and delivery schedules. Number of post-delivery defects and non-compliance issues. Reduction in recurring defects and faster resolution of production bugs. Timeliness in detecting, responding to, and resolving pipeline/data issues. Improvements in pipeline efficiency (e.g., runtime, resource utilization). Team engagement and upskilling; completion of relevant certifications. Zero or minimal data security/compliance breaches. Expected Deliverables Code High-quality data transformation scripts and pipelines. Peer-reviewed, optimized, and reusable code. Documentation Design documents, technical specifications, test plans, and infra cost estimations. Configuration & Testing Configuration management plans and test execution results. Knowledge Sharing Contributions to SharePoint, internal wikis, client university platforms. Skill Requirements Mandatory Technical Skills Languages : Python, PySpark, Scala ETL Tools : Apache Airflow, Talend, Informatica, AWS Glue, Databricks, DataProc Cloud Platforms : AWS, GCP, Azure (esp. 
BigQuery, Dataflow, ADF, ADLS) Data Warehousing: Snowflake, BigQuery, Delta Lake, Lakehouse architecture Performance Tuning: For large-scale distributed systems and pipelines Additional Skills Experience in data model design and optimization. Good understanding of data schemas, window functions, and data partitioning strategies. Awareness of data governance, security standards, and compliance. Familiarity with DevOps, CI/CD, infrastructure cost estimation. Certifications (Preferred) Cloud certifications (e.g., AWS Data Analytics, GCP Data Engineer) Informatica or Databricks certification Domain-specific certifications based on project/client need Soft Skills Strong analytical and problem-solving capabilities Excellent communication and documentation skills Ability to work independently and collaboratively in cross-functional teams Stakeholder management and customer interaction Show more Show less
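The role above calls out window functions and partitioning strategies; the sketch below shows one common PySpark pattern: keep only the latest record per key with a window function, then write the result partitioned by date so downstream queries can prune files. Paths and column names are assumptions for illustration only.

```python
# PySpark sketch: keep the latest record per key via a window function, then
# write partitioned output. Paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("dedupe_latest_demo").getOrCreate()

events = spark.read.parquet("gs://example-lake/raw/customer_events/")

# Rank events per customer by event timestamp, newest first.
w = Window.partitionBy("customer_id").orderBy(F.col("event_ts").desc())

latest = (events
          .withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)
          .drop("rn"))

# Partition by event date so downstream queries can prune files.
(latest
 .withColumn("event_date", F.to_date("event_ts"))
 .write.mode("overwrite")
 .partitionBy("event_date")
 .parquet("gs://example-lake/curated/customer_latest/"))
```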
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
WHAT YOU DO AT AMD CHANGES EVERYTHING We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences - the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. AMD together we advance_ The Role The Software Technical Marketing team is looking for someone to drive features, methodology and collateral around the software development flow for machine learning applications. The Person We are looking for a highly motivated and skilled Machine Learning and AI Technical Marketing Engineer with experience in system design, as well as FPGA and embedded software tools, to scale the team’s ability to deliver customer-focused solutions for current and next-generation AECG platforms. Candidates should have a desire to deliver solutions that enable customers to accomplish their goals, be self-motivated, work well within a distributed team environment, and be able to communicate technical concepts in simple terms. Key Responsibilities Collaborate with market segment architects and business leads to create customer-focused machine learning and signal processing application collateral that addresses the complex needs of customers in Aerospace and Defense, Automotive, Wired and Wireless Networks, Test and Measurement, Medical, Industrial and Vision markets, and Audio Video Broadcasting. Work closely with the Vivado, Vitis and Vitis AI tools, IP, system software, and boards marketing teams to support customers and drive deliverables as part of the overall solution plan for existing and next-generation embedded silicon devices. Interface with product marketing and engineering teams to prioritize and align solution deliverables during release planning processes. Support customers using Vitis AI and other tools for machine learning applications. Present solution progress updates to executives and deliver solution, silicon, and customer application presentations to internal marketing and engineering teams. Drive solution deliverables to support machine learning applications in FPGA and SoC product families. Preferred Experience Tenured industry experience with machine learning programming, optimization and debug techniques. Proficient industry experience with embedded software programming, optimization and debug techniques. Ability to understand a broad set of applications, from traditional FPGA-centric applications such as Wired and Wireless Communications, Aerospace and Defense and general Digital Signal Processing, to emerging applications in Artificial Intelligence, Machine Learning, Vision Processing and Autonomous Driving. Have experience with FPGA and Adaptive SoC products and exposure to Vivado, Vitis and Vitis AI design tools. Have experience with system-level analysis, such as interface and memory bandwidth, as well as compute and dataflow analysis. Have experience with some or all of the following ML networks for embedded applications: CNNs, RNNs, MLPs, GNNs and Transformers. Ability to break down large complex problems into manageable deliverables and be able to manage and prioritize requirements from many stakeholders. 
Thrive in a fast-paced environment at the forefront of new technology and invention. Beneficial to have Project Management experience, excellent organizational skills, and a process-oriented mindset. Experience: B.Tech / M.Tech with 15+ years of experience. Benefits offered are described: AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process. Show more Show less
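As a rough illustration of the "compute and dataflow analysis" mentioned above, here is a tiny back-of-the-envelope Python sketch that estimates the multiply-accumulate count and weight footprint of a single convolution layer. The layer dimensions and INT8 assumption are hypothetical examples, not figures from AMD tooling.

```python
# Back-of-the-envelope compute/memory estimate for a single conv layer,
# the kind of dataflow analysis the role describes. Numbers are illustrative.
def conv2d_cost(h_out, w_out, c_in, c_out, k, bytes_per_weight=1):
    """Return (MACs, weight bytes) for a k x k convolution layer."""
    macs = h_out * w_out * c_out * c_in * k * k
    weight_bytes = c_out * c_in * k * k * bytes_per_weight  # e.g., INT8 weights
    return macs, weight_bytes

# Example: 224x224 output feature map, 3 -> 64 channels, 3x3 kernel, INT8 weights.
macs, weights = conv2d_cost(224, 224, 3, 64, 3)
print(f"MACs: {macs / 1e6:.1f} M, weights: {weights / 1024:.1f} KiB")
# Prints roughly: MACs: 86.7 M, weights: 1.7 KiB
```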
Posted 1 month ago
5.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
On-site
Job Description This is a new consumer initiative under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. Summary Database Engineer/Developer - Core Skills Proficiency in SQL and relational database management systems like PostgreSQL or MySQL, along with database design principles. Strong familiarity with Python for scripting and data manipulation tasks, with additional knowledge of Python OOP being advantageous. A good understanding of data security measures and compliance is also required. Demonstrated problem-solving skills with a focus on optimizing database performance and automating data import processes, and knowledge of cloud-based databases like AWS RDS and Google BigQuery. Min 5 years of experience. JD Database Engineer - Data Research Engineering Position Overview At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Research Engineering Team is a brand-new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Their responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. They play a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role. Responsibilities Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale databases, and databases involving big data processing. 
Work on data security and compliance, by implementing access controls, encryption, and compliance standards. Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery. Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency. Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and identify and resolve issues. Collaborate with the full-stack web developer in the team to support the implementation of efficient data access and retrieval mechanisms. Implement data security measures to protect sensitive information and comply with relevant regulations. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis. Use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones. Prioritize tasks to ensure timely delivery, in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems. Leverage online resources effectively like StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations. Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. 
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. Eagerness to develop import workflows and scripts to automate data import processes. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and ability to work independently. Show more Show less
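As an illustration of the automated import workflows this role describes, the sketch below loads a CSV into a PostgreSQL staging table after basic validation, using pandas and SQLAlchemy. The file name, required columns, validation rules, and connection string are hypothetical placeholders, not details from the posting.

```python
# Sketch of an automated CSV -> PostgreSQL import with basic validation,
# in the spirit of the import workflows described above. Names are assumptions.
import pandas as pd
from sqlalchemy import create_engine

REQUIRED_COLUMNS = {"provider_id", "product_name", "apr"}

def import_rates(csv_path: str, conn_url: str) -> int:
    df = pd.read_csv(csv_path)

    # Validate: schema, nulls in key columns, and obviously bad values.
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"CSV missing required columns: {missing}")
    df = df.dropna(subset=["provider_id"]).drop_duplicates("provider_id")
    df = df[(df["apr"] >= 0) & (df["apr"] <= 100)]

    # Load into a staging table; merging into production happens in SQL.
    engine = create_engine(conn_url)
    df.to_sql("stg_provider_rates", engine, if_exists="replace", index=False)
    return len(df)

rows = import_rates("rates.csv", "postgresql+psycopg2://user:pass@localhost/research")
print(f"Loaded {rows} validated rows into staging")
```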
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
Job Title: Senior Data Engineer Experience: 5+ Years Location: Remote Contract Duration: Short Term Work Time: IST Shift Job Description We are seeking a skilled and experienced Senior Data Engineer to develop scalable and optimized data pipelines using the Databricks Lakehouse platform. The role requires proficiency in Apache Spark, PySpark, cloud data services (AWS, Azure, GCP), and solid programming knowledge in Python and Java. The engineer will collaborate with cross-functional teams to design and deliver high-performing data solutions. Responsibilities Data Pipeline Development Build efficient ETL/ELT workflows using Databricks and Spark for batch and streaming data Utilize Delta Lake and Unity Catalog for structured data management Optimize Spark jobs using tuning techniques such as caching, partitioning, and serialization Cloud-Based Implementation Develop and deploy data workflows on AWS (S3, EMR, Glue), Azure (ADLS, ADF, Synapse), and/or GCP (GCS, Dataflow, BigQuery) Manage and optimize data storage, access control, and orchestration using native cloud tools Implement data ingestion and querying with Databricks Auto Loader and SQL Warehousing Programming and Automation Write clean, reusable, and production-grade code in Python and Java Automate workflows using orchestration tools like Airflow, ADF, or Cloud Composer Implement testing, logging, and monitoring mechanisms Collaboration and Support Work closely with data analysts, scientists, and business teams to meet data requirements Support and troubleshoot production workflows Document solutions, maintain version control, and follow Agile/Scrum methodologies Required Skills Technical Skills Databricks: Experience with notebooks, cluster management, Delta Lake, Unity Catalog, and job orchestration Spark: Proficient in transformations, joins, window functions, and tuning Programming: Strong in PySpark and Java, with data validation and error handling expertise Cloud: Experience with AWS, Azure, or GCP data services and security frameworks Tools: Familiarity with Git, CI/CD, Docker (preferred), and data monitoring tools Experience 5–8 years in data engineering or backend development Minimum 1–2 years of hands-on experience with Databricks and Spark Experience with large-scale data migration, processing, or analytics projects Certifications (Optional but Preferred) Databricks Certified Data Engineer Associate Working Conditions Full-time remote work with availability during IST hours Occasional on-site presence may be required during client visits No regular travel required On-call support expected during deployment phases Show more Show less
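For context on the Databricks Auto Loader and Delta Lake items above, here is a minimal ingestion sketch. It assumes a Databricks runtime (where the spark session is provided and the cloudFiles source and Delta format are available); the paths, checkpoint location, and table name are illustrative assumptions.

```python
# Databricks Auto Loader sketch: incrementally ingest JSON files into Delta.
# Assumes a Databricks runtime where `spark` is provided and Delta/cloudFiles
# are available; paths and checkpoint locations are illustrative assumptions.
from pyspark.sql import functions as F

raw_stream = (spark.readStream
              .format("cloudFiles")
              .option("cloudFiles.format", "json")
              .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/orders")
              .load("/mnt/lake/landing/orders/"))

cleaned = (raw_stream
           .withColumn("ingest_ts", F.current_timestamp())
           .dropDuplicates(["order_id"]))

(cleaned.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/lake/_checkpoints/orders")
 .outputMode("append")
 .trigger(availableNow=True)          # process all new files, then stop
 .toTable("bronze.orders"))
```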
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
Client Type: US Client Location: Remote About the Role We’re creating a new certification: Google AI Ecosystem Architect (Gemini & DeepMind) - Subject Matter Expert . This course is designed for technical learners who want to understand and apply the capabilities of Google’s Gemini models and DeepMind technologies to build powerful, multimodal AI applications. We’re looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You’ll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound. Responsibilities As the SME, you’ll partner with learning experience designers and content developers to: Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals. Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind’s reinforcement learning libraries. Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines. Ensure all content reflects current, accurate usage of Google’s multimodal tools and services. Be available during U.S. business hours to support project milestones, reviews, and content feedback. This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem. Essential Tools & Platforms A successful SME in this role will demonstrate fluency and hands-on experience with the following: Google Cloud Platform (GCP) Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment) Cloud Functions, Cloud Run (for inference endpoints) BigQuery and Cloud Storage (for handling large image-text datasets) AI Platform Notebooks or Colab Pro Google DeepMind Technologies JAX and Haiku (for neural network modeling and research-grade experimentation) DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations) RLax or TF-Agents (for building and modifying RL pipelines) AI/ML & Multimodal Tooling Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting) TensorFlow 2.x and PyTorch (for model interoperability) Label Studio, Cloud Vision API (for annotation and image-text preprocessing) Data Science & MLOps DVC or MLflow (for dataset and model versioning) Apache Beam or Dataflow (for processing multimodal input streams) TensorBoard or Weights & Biases (for visualization) Content Authoring & Collaboration GitHub or Cloud Source Repositories Google Docs, Sheets, Slides Screen recording tools like Loom or OBS Studio Required skills and experience: Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code. Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of Thought, few-shot, zero-shot). Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions. 
Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini’s native multimodal capabilities. Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic. Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval. Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions. Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic. Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices. Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms. Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents. Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy. Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications. Experience addressing ethical challenges in the deployment and operation of advanced AI systems. Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers. Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real world projects. 5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI. Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development. Bachelor’s or Master’s degree in Computer Science, Data Engineering, AI, or a related technical field. Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way. Strong programming experience in Python and experience deploying machine learning pipelines Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers Preferred: Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines. Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices. Prior contributions to open-source AI projects or technical community engagement. Show more Show less
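To give a flavour of the Gemini SDK usage the course will cover, below is a minimal multimodal call using the google-generativeai Python client. The model name, image file, and prompt are assumptions, and the SDK surface evolves between releases, so treat this as a sketch rather than a definitive reference.

```python
# Minimal multimodal Gemini sketch using the google-generativeai SDK.
# Model name, file path, and prompt are assumptions; the SDK surface evolves,
# so check current docs before relying on this exact call shape.
import os

import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

image = Image.open("floorplan.png")
response = model.generate_content(
    ["Summarize this floor plan and list the rooms you can identify.", image]
)
print(response.text)
```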
Posted 1 month ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Skills: Data Engineer, Spark, Scala, Python, On-premise, Cloudera, Snowflake, Kafka. Overview Of The Company Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries. Team Overview The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution! About the role. Title: Senior Data Engineer Location: Mumbai Responsibilities End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow. Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution. Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise. Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices. Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights. Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth. Qualification Details Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field. Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts. Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.). Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus. End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks including streaming real-time data. Cloud Expertise: Knowledge of cloud technologies like Azure HDInsight, Synapse, Event Hubs and GCP Dataproc, Dataflow, BigQuery. 
CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation. Desired Skills & Attributes Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems, troubleshoot data pipeline issues effectively. Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders). Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools. Show more Show less
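To illustrate the streaming stack named above (Kafka plus Spark Structured Streaming), here is a minimal PySpark sketch that consumes a Kafka topic and lands the parsed events as Parquet micro-batches. Broker addresses, the topic, schema, and paths are assumptions, and the job would need the spark-sql-kafka connector package on its classpath.

```python
# Spark Structured Streaming sketch: consume a Kafka topic and land it as
# Parquet micro-batches. Brokers, topic, schema, and paths are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_ingest_demo").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
          .option("subscribe", "telemetry")
          .option("startingOffsets", "latest")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/data/lake/telemetry/")
         .option("checkpointLocation", "/data/checkpoints/telemetry/")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```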
Posted 1 month ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology. What You’ll Do Design, develop, and operate high scale applications across the full engineering stack. Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. Research, create, and develop software applications to extend and improve on Equifax Solutions. Manage sole project priorities, deadlines, and deliverables. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activity What Experience You Need Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS 5+ years experience with Cloud technology: GCP, AWS, or Azure 5+ years experience designing and developing cloud-native solutions 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs What could set you apart Self-starter that identifies/responds to priority shifts with minimal supervision. Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others UI development (e.g. HTML, JavaScript, Angular and Bootstrap) Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices Source code control management systems (e.g. SVN/Git, Github) and build tools like Maven & Gradle. Agile environments (e.g. Scrum, XP) Relational databases (e.g. SQL Server, MySQL) Atlassian tooling (e.g. JIRA, Confluence, and Github) Developing with modern JDK (v1.7+) Automated Testing: JUnit, Selenium, LoadRunner, SoapUI Cloud Certification strongly preferred We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference! Who is Equifax? 
At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Show more Show less
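The posting above lists Dataflow/Apache Beam, Pub/Sub, and BigQuery among the differentiators; the sketch below shows the Beam programming model for a streaming Pub/Sub-to-BigQuery pipeline. It uses the Python SDK purely for illustration (the role itself centres on Java/SpringBoot), and the project, subscription, table, and schema names are assumptions.

```python
# Apache Beam (Python SDK) sketch: stream Pub/Sub messages into BigQuery, as
# run on Dataflow. Subscription, table, and schema are illustrative assumptions.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.io.gcp.pubsub import ReadFromPubSub
from apache_beam.io.gcp.bigquery import WriteToBigQuery

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> ReadFromPubSub(subscription="projects/example/subscriptions/events-sub")
     | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
     | "Write" >> WriteToBigQuery(
           "example-project:analytics.events",
           schema="event_id:STRING,event_type:STRING,ts:TIMESTAMP",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```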
Posted 1 month ago
10.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
About The Job Position Name - Senior Data & AI/ML Engineer – GCP Specialization Lead Minimum Experience - 10+ years Expected Date of Joining - Immediate Primary Skill GCP Services: BigQuery, Dataflow, Pub/Sub, Vertex AI ML Engineering: End-to-end ML pipelines using Vertex AI / Kubeflow Programming: Python & SQL MLOps: CI/CD for ML, Model deployment & monitoring Infrastructure-as-Code: Terraform Data Engineering: ETL/ELT, real-time & batch pipelines AI/ML Tools: TensorFlow, scikit-learn, XGBoost Secondary Skills GCP Certifications: Professional Data Engineer or ML Engineer Data Tools: Looker, Dataform, Data Catalog AI Governance: Model explainability, privacy, compliance (e.g., GDPR, fairness) GCP Partner Experience: Prior involvement in specialization journey or partner enablement Work Location - Remote What Makes Techjays An Inspiring Place To Work At Techjays, we are driving the future of artificial intelligence with a bold mission to empower businesses worldwide by helping them build AI solutions that transform industries. As an established leader in the AI space, we combine deep expertise with a collaborative, agile approach to deliver impactful technology that drives meaningful change. Our global team consists of professionals who have honed their skills at leading companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. With engineering teams across the globe, we deliver tailored AI software and services to clients ranging from startups to large-scale enterprises. Be part of a company that’s pushing the boundaries of digital transformation. At Techjays, you’ll work on exciting projects that redefine industries, innovate with the latest technologies, and contribute to solutions that make a real-world impact. Join us on our journey to shape the future with AI. We are seeking a Senior Data & AI/ML Engineer with deep expertise in GCP, who will not only build intelligent and scalable data solutions but also champion our internal capability building and partner-level excellence. This is a high-impact role for a seasoned engineer who thrives in designing GCP-native AI/ML-enabled data platforms. You’ll play a dual role as a hands-on technical lead and a strategic enabler, helping drive our Google Cloud Data & AI/ML specialization track forward through successful implementations, reusable assets, and internal skill development. Preferred Qualification GCP Professional Certifications: Data Engineer or Machine Learning Engineer. Experience contributing to a GCP Partner specialization journey. Familiarity with Looker, Data Catalog, Dataform, or other GCP data ecosystem tools. Knowledge of data privacy, model explainability, and AI governance is a plus. Work Location: Remote Key Responsibilities Data & AI/ML Architecture Design and implement data architectures for real-time and batch pipelines, leveraging GCP services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI, and Cloud Storage. Lead the development of ML pipelines, from feature engineering to model training and deployment using Vertex AI, AI Platform, and Kubeflow Pipelines. Collaborate with data scientists to operationalize ML models and support MLOps practices using Cloud Functions, CI/CD, and Model Registry. Define and implement data governance, lineage, monitoring, and quality frameworks. Google Cloud Partner Enablement Build and document GCP-native solutions and architectures that can be used for case studies and specialization submissions. 
Lead client-facing PoCs or MVPs to showcase AI/ML capabilities using GCP. Contribute to building repeatable solution accelerators in Data & AI/ML. Work with the leadership team to align with Google Cloud Partner Program metrics. Team Development Mentor engineers and data scientists toward achieving GCP certifications, especially in Data Engineering and Machine Learning. Organize and lead internal GCP AI/ML enablement sessions. Represent the company in Google partner ecosystem events, tech talks, and joint GTM engagements. What We Offer Best-in-class packages. Paid holidays and flexible time-off policies. Casual dress code and a flexible working environment. Opportunities for professional development in an engaging, fast-paced environment. Medical insurance covering self and family up to 4 lakhs per person. Diverse and multicultural work environment. Be part of an innovation-driven culture with ample support and resources to succeed. Show more Show less
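As a concrete example of the Vertex AI pipeline work described above, here is a small sketch that defines a trivial KFP v2 pipeline and submits it to Vertex AI Pipelines with the google-cloud-aiplatform client. The project, region, staging bucket, and component logic are illustrative assumptions, not details from the posting.

```python
# Sketch: define a tiny KFP v2 pipeline and submit it to Vertex AI Pipelines.
# Project, region, bucket, and component logic are illustrative assumptions.
from kfp import dsl, compiler
from google.cloud import aiplatform

@dsl.component(base_image="python:3.10")
def validate_data(rows: int) -> str:
    if rows < 1000:
        raise ValueError("Not enough rows to train")
    return "ok"

@dsl.pipeline(name="demo-training-pipeline")
def pipeline(rows: int = 5000):
    validate_data(rows=rows)

compiler.Compiler().compile(pipeline, "pipeline.json")

aiplatform.init(project="example-project", location="us-central1",
                staging_bucket="gs://example-pipeline-artifacts")
job = aiplatform.PipelineJob(
    display_name="demo-training-pipeline",
    template_path="pipeline.json",
    parameter_values={"rows": 5000},
)
job.run()  # blocks until the pipeline run finishes
```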
Posted 1 month ago