PaddleOCR-EAST

Contents

  • EAST
  • Abstract
  • Train
    • PreProcess
    • Architecture
      • Backbone
      • Neck
      • Head
    • Loss
      • Dice Loss
      • SmoothL1 Loss
  • Infer
    • PostProcess
EAST
Preface: this is a code-level walkthrough of the algorithms in the PaddleOCR codebase; where necessary, the original paper is studied first.
Abstract
  • Paper link: arxiv
  • Application: text detection
  • Config file: configs/det/det_r50_vd_east.yml
Train

PreProcess

```python
class EASTProcessTrain(object):
    def __init__(self,
                 image_shape=[512, 512],
                 background_ratio=0.125,
                 min_crop_side_ratio=0.1,
                 min_text_size=10,
                 **kwargs):
        self.input_size = image_shape[1]
        self.random_scale = np.array([0.5, 1, 2.0, 3.0])
        self.background_ratio = background_ratio
        self.min_crop_side_ratio = min_crop_side_ratio
        self.min_text_size = min_text_size

    ...

    def __call__(self, data):
        im = data['image']
        text_polys = data['polys']
        text_tags = data['ignore_tags']
        if im is None:
            return None
        if text_polys.shape[0] == 0:
            return None
        # add rotate cases
        if np.random.rand() < 0.5:
            # rotate the image and the text polygons (90, 180 or 270 degrees)
            im, text_polys = self.rotate_im_poly(im, text_polys)
        h, w, _ = im.shape
        # clip polygon coordinates to the valid range, check polygon validity
        # (based on area) and that the points are in clockwise order
        text_polys, text_tags = self.check_and_validate_polys(text_polys,
                                                              text_tags, h, w)
        if text_polys.shape[0] == 0:
            return None
        # randomly rescale the image and the text polygons
        rd_scale = np.random.choice(self.random_scale)
        im = cv2.resize(im, dsize=None, fx=rd_scale, fy=rd_scale)
        text_polys *= rd_scale
        if np.random.rand() < self.background_ratio:
            # crop a pure-background patch; returns None if any text box is hit
            outs = self.crop_background_infor(im, text_polys, text_tags)
        else:
            """
            Randomly crop a patch together with the text boxes it contains,
            and generate label maps from the shrunk text boxes:
            - score_map: shape=[h,w]; score map, 1 inside text, 0 elsewhere
            - geo_map: shape=[h,w,9]; the first 8 channels hold the horizontal
              and vertical distances from pixels inside the shrunk box to the
              real text box vertices; the last channel is used to normalize
              the loss and holds the reciprocal of each box's shortest side
            - training_mask: shape=[h,w]; keeps invalid text boxes out of
              training, 1 where valid, 0 where invalid
            """
            outs = self.crop_foreground_infor(im, text_polys, text_tags)
        if outs is None:
            return None
        im, score_map, geo_map, training_mask = outs
        # final downsampled score map, shape=[1,h//4,w//4]
        score_map = score_map[np.newaxis, ::4, ::4].astype(np.float32)
        # final downsampled geo map, shape=[9,h//4,w//4]
        geo_map = np.swapaxes(geo_map, 1, 2)
        geo_map = np.swapaxes(geo_map, 1, 0)
        geo_map = geo_map[:, ::4, ::4].astype(np.float32)
        # final downsampled training mask, shape=[1,h//4,w//4]
        training_mask = training_mask[np.newaxis, ::4, ::4]
        training_mask = training_mask.astype(np.float32)
        data['image'] = im[0]
        data['score_map'] = score_map
        data['geo_map'] = geo_map
        data['training_mask'] = training_mask
        return data
```

Architecture

Backbone

resnet50_vd, producing 4 downsampled feature maps at 1/4, 1/8, 1/16 and 1/32 scale.
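Back in PreProcess, the two `np.swapaxes` calls on `geo_map` amount to a single HWC-to-CHW transpose, and the `[::4, ::4]` strides align the labels with the network's 1/4-scale output. A quick numpy check (random arrays stand in for the generated label maps):

```python
import numpy as np

# Stand-ins for the full-resolution label maps produced during cropping.
h, w = 512, 512
score_map = np.random.randint(0, 2, (h, w))           # 1 inside text, 0 elsewhere
geo_map = np.random.rand(h, w, 9).astype(np.float32)  # 8 offsets + 1 norm channel
training_mask = np.ones((h, w))

# swapaxes(1, 2) followed by swapaxes(1, 0) equals transpose(2, 0, 1): HWC -> CHW
chw = np.swapaxes(np.swapaxes(geo_map, 1, 2), 1, 0)
assert np.array_equal(chw, np.transpose(geo_map, (2, 0, 1)))

# Stride by 4 to match the 1/4-scale network output.
score_map = score_map[np.newaxis, ::4, ::4].astype(np.float32)
geo_map = chw[:, ::4, ::4]
training_mask = training_mask[np.newaxis, ::4, ::4].astype(np.float32)
print(score_map.shape, geo_map.shape, training_mask.shape)
# (1, 128, 128) (9, 128, 128) (1, 128, 128)
```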
Neck

Based on a U-Net decoder, the neck performs bottom-up feature fusion, progressively merging from the 1/32 feature map up to the 1/4 feature map, finally producing a single 1/4-scale feature map that carries multi-scale information.
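As a sanity check on the fusion arithmetic, here is a shape-only sketch in plain Python; the 128-channel width and the backbone channel counts are taken from the comments in the neck's forward pass:

```python
def neck_output_shape(input_hw=(512, 512)):
    """Trace spatial stride and channel count through the three fusion steps."""
    H, W = input_hw
    skips = [512, 256, 128, 64]   # backbone channels at 1/32, 1/16, 1/8, 1/4
    stride = 32
    for skip in skips[1:]:
        stride //= 2              # g_deconv: 2x upsample, emits 128 channels
        concat_ch = 128 + skip    # paddle.concat([g, f[i]], axis=1)
        # h_conv squeezes concat_ch back down to 128 channels
    return (128, H // stride, W // stride)  # g3_conv keeps 128 channels

print(neck_output_shape())  # (128, 128, 128): the fused 1/4-scale map
```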
```python
def forward(self, x):
    # x holds the 4 feature maps from the backbone
    f = x[::-1]  # now ordered from smallest to largest
    h = f[0]                              # [b,512,h/32,w/32]
    g = self.g0_deconv(h)                 # [b,128,h/16,w/16]
    h = paddle.concat([g, f[1]], axis=1)  # [b,128+256,h/16,w/16]
    h = self.h1_conv(h)                   # [b,128,h/16,w/16]
    g = self.g1_deconv(h)                 # [b,128,h/8,w/8]
    h = paddle.concat([g, f[2]], axis=1)  # [b,128+128,h/8,w/8]
    h = self.h2_conv(h)                   # [b,128,h/8,w/8]
    g = self.g2_deconv(h)                 # [b,128,h/4,w/4]
    h = paddle.concat([g, f[3]], axis=1)  # [b,128+64,h/4,w/4]
    h = self.h3_conv(h)                   # [b,128,h/4,w/4]
    g = self.g3_conv(h)                   # [b,128,h/4,w/4]
    return g
```

Head

Outputs a classification branch and a regression branch (quad), with some parameters shared between them.
```python
def forward(self, x, targets=None):
    # x is the fused 1/4 feature map; det_conv1 and det_conv2 further
    # strengthen feature extraction
    f_det = self.det_conv1(x)      # [b,128,h/4,w/4]
    f_det = self.det_conv2(f_det)  # [b,64,h/4,w/4]
    # [b,1,h/4,w/4], foreground/background classification; note kernel_size=1
    f_score = self.score_conv(f_det)
    f_score = F.sigmoid(f_score)  # classification score
    # [b,8,h/4,w/4]; the 8 channels are dx1,dy1,dx2,dy2,dx3,dy3,dx4,dy4
    f_geo = self.geo_conv(f_det)
    # the regression range becomes [-800, 800], so the longest side of a
    # recovered text box cannot exceed 1600
    f_geo = (F.sigmoid(f_geo) - 0.5) * 2 * 800
    pred = {'f_score': f_score, 'f_geo': f_geo}
    return pred
```

Loss

Classification uses dice loss; regression uses smooth L1 loss.
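The `(sigmoid(x) - 0.5) * 2 * 800` squashing in the head can be checked numerically with a numpy stand-in for `F.sigmoid`: it maps raw logits into the open interval (-800, 800), so each per-vertex offset is bounded and a recovered box edge cannot exceed 1600 pixels.

```python
import numpy as np

def geo_activation(x):
    # same mapping as the head: sigmoid -> (0,1), recenter, scale to (-800,800)
    return (1.0 / (1.0 + np.exp(-x)) - 0.5) * 2 * 800

logits = np.array([-50.0, -1.0, 0.0, 1.0, 50.0])
offsets = geo_activation(logits)
print(offsets)  # approaches -800/+800 at the extremes, exactly 0 at logit 0
```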
```python
class EASTLoss(nn.Layer):
    def __init__(self, eps=1e-6, **kwargs):
        super(EASTLoss, self).__init__()
        self.dice_loss = DiceLoss(eps=eps)

    def forward(self, predicts, labels):
        """
        Params:
            predicts: {'f_score': foreground score map, 'f_geo': regression map}
            labels: [imgs, l_score, l_geo, l_mask]
        """
        l_score, l_geo, l_mask = labels[1:]
        f_score = predicts['f_score']
        f_geo = predicts['f_geo']
        # classification loss
        dice_loss = self.dice_loss(f_score, l_score, l_mask)

        channels = 8
        # channels+1 because the last map holds the short-side normalization
        # factor (see PreProcess); the first 8 are the relative-offset labels
        # [[b,1,h/4,w/4], ...], 9 tensors
        l_geo_split = paddle.split(l_geo, num_or_sections=channels + 1, axis=1)
        # [[b,1,h/4,w/4], ...], 8 tensors
        f_geo_split = paddle.split(f_geo, num_or_sections=channels, axis=1)
        smooth_l1 = 0
        for i in range(0, channels):
            geo_diff = l_geo_split[i] - f_geo_split[i]  # diff = label - pred
            abs_geo_diff = paddle.abs(geo_diff)
            # positions where abs_diff < 1 and text is present
            smooth_l1_sign = paddle.less_than(abs_geo_diff, l_score)
            smooth_l1_sign = paddle.cast(smooth_l1_sign, dtype='float32')
            # smooth L1: add the |diff| < 1 branch and the |diff| >= 1 branch;
            # the < 1 branch here is not multiplied by 0.5, a minor deviation
            in_loss = abs_geo_diff * abs_geo_diff * smooth_l1_sign + \
                      (abs_geo_diff - 0.5) * (1.0 - smooth_l1_sign)
            # normalize by the shortest side, spread over the 8 channels
            out_loss = l_geo_split[-1] / channels * in_loss * l_score
            smooth_l1 += out_loss
        # paddle.mean(smooth_l1) would already suffice: out_loss was multiplied
        # by l_score above, so multiplying by it again changes nothing
        smooth_l1_loss = paddle.mean(smooth_l1 * l_score)

        # dice_loss is weighted 0.01, smooth_l1_loss is weighted 1
        dice_loss = dice_loss * 0.01
        total_loss = dice_loss + smooth_l1_loss
        losses = {"loss": total_loss,
                  "dice_loss": dice_loss,
                  "smooth_l1_loss": smooth_l1_loss}
        return losses
```
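`DiceLoss` itself is not shown above; the following is a minimal numpy sketch of the standard masked dice loss, which is assumed to match what PaddleOCR's `DiceLoss` computes. Because dice is a ratio of overlap to total area rather than a per-pixel average, it copes well with the extreme foreground/background imbalance of text score maps.

```python
import numpy as np

def dice_loss(pred, gt, mask, eps=1e-6):
    # masked dice: 1 - 2*|pred ∩ gt| / (|pred| + |gt|); the mask zeroes out
    # ignored regions so they contribute to neither numerator nor denominator
    intersection = np.sum(pred * gt * mask)
    union = np.sum(pred * mask) + np.sum(gt * mask) + eps
    return 1.0 - 2.0 * intersection / union

gt = np.zeros((8, 8), dtype=np.float32)
gt[2:6, 2:6] = 1.0                      # a small text region in a large background
mask = np.ones_like(gt)
print(dice_loss(gt, gt, mask))          # ~0.0 for a perfect prediction
print(dice_loss(1.0 - gt, gt, mask))    # 1.0 for a fully wrong prediction
```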
